Test Report: KVM_Linux_crio 19338

0eb0b855c9cd12df3081fe3f67aa770440dcda12:2024-07-29:35550

Failed tests (30/320)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 152.53
45 TestAddons/parallel/MetricsServer 344.24
54 TestAddons/StoppedEnableDisable 154.45
173 TestMultiControlPlane/serial/StopSecondaryNode 141.98
175 TestMultiControlPlane/serial/RestartSecondaryNode 50.15
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 408.44
180 TestMultiControlPlane/serial/StopCluster 141.74
240 TestMultiNode/serial/RestartKeepsNodes 320.44
242 TestMultiNode/serial/StopMultiNode 141.29
249 TestPreload 277.42
257 TestKubernetesUpgrade 464.32
292 TestPause/serial/SecondStartNoReconfiguration 53.1
328 TestStartStop/group/old-k8s-version/serial/FirstStart 272.45
348 TestStartStop/group/no-preload/serial/Stop 139.3
351 TestStartStop/group/embed-certs/serial/Stop 138.9
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.1
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 107.22
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 707.63
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.19
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.23
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.25
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.43
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 474.79
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 413.7
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 315.69
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 169.83
TestAddons/parallel/Ingress (152.53s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-881745 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-881745 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-881745 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [403be47e-4fc8-4b4a-92f4-57da8aa66907] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [403be47e-4fc8-4b4a-92f4-57da8aa66907] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004241504s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-881745 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.657252829s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-881745 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.103
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 addons disable ingress-dns --alsologtostderr -v=1: (1.336576334s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 addons disable ingress --alsologtostderr -v=1: (7.695640979s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-881745 -n addons-881745
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 logs -n 25: (1.176397242s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-336101                                                                     | download-only-336101 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| delete  | -p download-only-960541                                                                     | download-only-960541 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-696925 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | binary-mirror-696925                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33915                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-696925                                                                     | binary-mirror-696925 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | addons-881745                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | addons-881745                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-881745 --wait=true                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:14 UTC | 29 Jul 24 13:14 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-881745 ssh cat                                                                       | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | /opt/local-path-provisioner/pvc-2f86f84c-0179-4267-9abb-37e36ba02c83_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-881745 ip                                                                            | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | addons-881745                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-881745 ssh curl -s                                                                   | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-881745 addons                                                                        | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | addons-881745                                                                               |                      |         |         |                     |                     |
	| addons  | addons-881745 addons                                                                        | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | -p addons-881745                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:16 UTC | 29 Jul 24 13:16 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:16 UTC | 29 Jul 24 13:16 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:16 UTC | 29 Jul 24 13:16 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:16 UTC | 29 Jul 24 13:16 UTC |
	|         | -p addons-881745                                                                            |                      |         |         |                     |                     |
	| ip      | addons-881745 ip                                                                            | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:17 UTC |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:12:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:12:11.880480  982934 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:12:11.880621  982934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:12:11.880631  982934 out.go:304] Setting ErrFile to fd 2...
	I0729 13:12:11.880635  982934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:12:11.880835  982934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:12:11.881440  982934 out.go:298] Setting JSON to false
	I0729 13:12:11.882512  982934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10484,"bootTime":1722248248,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:12:11.882577  982934 start.go:139] virtualization: kvm guest
	I0729 13:12:11.884635  982934 out.go:177] * [addons-881745] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:12:11.885929  982934 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:12:11.886010  982934 notify.go:220] Checking for updates...
	I0729 13:12:11.888451  982934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:12:11.889593  982934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:12:11.890718  982934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:12:11.891916  982934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:12:11.893247  982934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:12:11.894499  982934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:12:11.925062  982934 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:12:11.926166  982934 start.go:297] selected driver: kvm2
	I0729 13:12:11.926189  982934 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:12:11.926205  982934 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:12:11.926915  982934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:12:11.927004  982934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:12:11.941706  982934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:12:11.941748  982934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:12:11.941939  982934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:12:11.941963  982934 cni.go:84] Creating CNI manager for ""
	I0729 13:12:11.941969  982934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:12:11.941975  982934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:12:11.942024  982934 start.go:340] cluster config:
	{Name:addons-881745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:12:11.942115  982934 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:12:11.943791  982934 out.go:177] * Starting "addons-881745" primary control-plane node in "addons-881745" cluster
	I0729 13:12:11.944861  982934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:12:11.944886  982934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:12:11.944896  982934 cache.go:56] Caching tarball of preloaded images
	I0729 13:12:11.944970  982934 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:12:11.944989  982934 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:12:11.945264  982934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/config.json ...
	I0729 13:12:11.945289  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/config.json: {Name:mk1324dba7512e03a30b119e27dd3470d567c772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:11.945415  982934 start.go:360] acquireMachinesLock for addons-881745: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:12:11.945457  982934 start.go:364] duration metric: took 30.275µs to acquireMachinesLock for "addons-881745"
	I0729 13:12:11.945475  982934 start.go:93] Provisioning new machine with config: &{Name:addons-881745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:12:11.945526  982934 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:12:11.946915  982934 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 13:12:11.947026  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:12:11.947059  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:12:11.960691  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
	I0729 13:12:11.961091  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:12:11.961698  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:12:11.961720  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:12:11.962039  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:12:11.962224  982934 main.go:141] libmachine: (addons-881745) Calling .GetMachineName
	I0729 13:12:11.962339  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:11.962476  982934 start.go:159] libmachine.API.Create for "addons-881745" (driver="kvm2")
	I0729 13:12:11.962507  982934 client.go:168] LocalClient.Create starting
	I0729 13:12:11.962545  982934 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 13:12:12.155740  982934 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 13:12:12.217284  982934 main.go:141] libmachine: Running pre-create checks...
	I0729 13:12:12.217308  982934 main.go:141] libmachine: (addons-881745) Calling .PreCreateCheck
	I0729 13:12:12.217832  982934 main.go:141] libmachine: (addons-881745) Calling .GetConfigRaw
	I0729 13:12:12.218300  982934 main.go:141] libmachine: Creating machine...
	I0729 13:12:12.218314  982934 main.go:141] libmachine: (addons-881745) Calling .Create
	I0729 13:12:12.218428  982934 main.go:141] libmachine: (addons-881745) Creating KVM machine...
	I0729 13:12:12.219688  982934 main.go:141] libmachine: (addons-881745) DBG | found existing default KVM network
	I0729 13:12:12.220401  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:12.220253  982956 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0729 13:12:12.220442  982934 main.go:141] libmachine: (addons-881745) DBG | created network xml: 
	I0729 13:12:12.220455  982934 main.go:141] libmachine: (addons-881745) DBG | <network>
	I0729 13:12:12.220463  982934 main.go:141] libmachine: (addons-881745) DBG |   <name>mk-addons-881745</name>
	I0729 13:12:12.220484  982934 main.go:141] libmachine: (addons-881745) DBG |   <dns enable='no'/>
	I0729 13:12:12.220495  982934 main.go:141] libmachine: (addons-881745) DBG |   
	I0729 13:12:12.220513  982934 main.go:141] libmachine: (addons-881745) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 13:12:12.220522  982934 main.go:141] libmachine: (addons-881745) DBG |     <dhcp>
	I0729 13:12:12.220579  982934 main.go:141] libmachine: (addons-881745) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 13:12:12.220598  982934 main.go:141] libmachine: (addons-881745) DBG |     </dhcp>
	I0729 13:12:12.220605  982934 main.go:141] libmachine: (addons-881745) DBG |   </ip>
	I0729 13:12:12.220609  982934 main.go:141] libmachine: (addons-881745) DBG |   
	I0729 13:12:12.220617  982934 main.go:141] libmachine: (addons-881745) DBG | </network>
	I0729 13:12:12.220622  982934 main.go:141] libmachine: (addons-881745) DBG | 
	I0729 13:12:12.225615  982934 main.go:141] libmachine: (addons-881745) DBG | trying to create private KVM network mk-addons-881745 192.168.39.0/24...
	I0729 13:12:12.291322  982934 main.go:141] libmachine: (addons-881745) DBG | private KVM network mk-addons-881745 192.168.39.0/24 created
	I0729 13:12:12.291380  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:12.291272  982956 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:12:12.291400  982934 main.go:141] libmachine: (addons-881745) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745 ...
	I0729 13:12:12.291427  982934 main.go:141] libmachine: (addons-881745) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:12:12.291477  982934 main.go:141] libmachine: (addons-881745) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:12:12.553094  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:12.552975  982956 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa...
	I0729 13:12:13.107412  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:13.107281  982956 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/addons-881745.rawdisk...
	I0729 13:12:13.107446  982934 main.go:141] libmachine: (addons-881745) DBG | Writing magic tar header
	I0729 13:12:13.107460  982934 main.go:141] libmachine: (addons-881745) DBG | Writing SSH key tar header
	I0729 13:12:13.107472  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:13.107389  982956 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745 ...
	I0729 13:12:13.107487  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745
	I0729 13:12:13.107496  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 13:12:13.107505  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:12:13.107511  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 13:12:13.107520  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:12:13.107545  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745 (perms=drwx------)
	I0729 13:12:13.107559  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:12:13.107569  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home
	I0729 13:12:13.107574  982934 main.go:141] libmachine: (addons-881745) DBG | Skipping /home - not owner
	I0729 13:12:13.107584  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:12:13.107592  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 13:12:13.107608  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 13:12:13.107624  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:12:13.107639  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:12:13.107648  982934 main.go:141] libmachine: (addons-881745) Creating domain...
	I0729 13:12:13.108771  982934 main.go:141] libmachine: (addons-881745) define libvirt domain using xml: 
	I0729 13:12:13.108802  982934 main.go:141] libmachine: (addons-881745) <domain type='kvm'>
	I0729 13:12:13.108810  982934 main.go:141] libmachine: (addons-881745)   <name>addons-881745</name>
	I0729 13:12:13.108815  982934 main.go:141] libmachine: (addons-881745)   <memory unit='MiB'>4000</memory>
	I0729 13:12:13.108821  982934 main.go:141] libmachine: (addons-881745)   <vcpu>2</vcpu>
	I0729 13:12:13.108827  982934 main.go:141] libmachine: (addons-881745)   <features>
	I0729 13:12:13.108834  982934 main.go:141] libmachine: (addons-881745)     <acpi/>
	I0729 13:12:13.108840  982934 main.go:141] libmachine: (addons-881745)     <apic/>
	I0729 13:12:13.108845  982934 main.go:141] libmachine: (addons-881745)     <pae/>
	I0729 13:12:13.108854  982934 main.go:141] libmachine: (addons-881745)     
	I0729 13:12:13.108861  982934 main.go:141] libmachine: (addons-881745)   </features>
	I0729 13:12:13.108871  982934 main.go:141] libmachine: (addons-881745)   <cpu mode='host-passthrough'>
	I0729 13:12:13.108878  982934 main.go:141] libmachine: (addons-881745)   
	I0729 13:12:13.108893  982934 main.go:141] libmachine: (addons-881745)   </cpu>
	I0729 13:12:13.108900  982934 main.go:141] libmachine: (addons-881745)   <os>
	I0729 13:12:13.108905  982934 main.go:141] libmachine: (addons-881745)     <type>hvm</type>
	I0729 13:12:13.108912  982934 main.go:141] libmachine: (addons-881745)     <boot dev='cdrom'/>
	I0729 13:12:13.108917  982934 main.go:141] libmachine: (addons-881745)     <boot dev='hd'/>
	I0729 13:12:13.108935  982934 main.go:141] libmachine: (addons-881745)     <bootmenu enable='no'/>
	I0729 13:12:13.108941  982934 main.go:141] libmachine: (addons-881745)   </os>
	I0729 13:12:13.108946  982934 main.go:141] libmachine: (addons-881745)   <devices>
	I0729 13:12:13.108955  982934 main.go:141] libmachine: (addons-881745)     <disk type='file' device='cdrom'>
	I0729 13:12:13.108968  982934 main.go:141] libmachine: (addons-881745)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/boot2docker.iso'/>
	I0729 13:12:13.108982  982934 main.go:141] libmachine: (addons-881745)       <target dev='hdc' bus='scsi'/>
	I0729 13:12:13.108989  982934 main.go:141] libmachine: (addons-881745)       <readonly/>
	I0729 13:12:13.108998  982934 main.go:141] libmachine: (addons-881745)     </disk>
	I0729 13:12:13.109006  982934 main.go:141] libmachine: (addons-881745)     <disk type='file' device='disk'>
	I0729 13:12:13.109017  982934 main.go:141] libmachine: (addons-881745)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:12:13.109027  982934 main.go:141] libmachine: (addons-881745)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/addons-881745.rawdisk'/>
	I0729 13:12:13.109034  982934 main.go:141] libmachine: (addons-881745)       <target dev='hda' bus='virtio'/>
	I0729 13:12:13.109039  982934 main.go:141] libmachine: (addons-881745)     </disk>
	I0729 13:12:13.109046  982934 main.go:141] libmachine: (addons-881745)     <interface type='network'>
	I0729 13:12:13.109051  982934 main.go:141] libmachine: (addons-881745)       <source network='mk-addons-881745'/>
	I0729 13:12:13.109058  982934 main.go:141] libmachine: (addons-881745)       <model type='virtio'/>
	I0729 13:12:13.109063  982934 main.go:141] libmachine: (addons-881745)     </interface>
	I0729 13:12:13.109069  982934 main.go:141] libmachine: (addons-881745)     <interface type='network'>
	I0729 13:12:13.109075  982934 main.go:141] libmachine: (addons-881745)       <source network='default'/>
	I0729 13:12:13.109082  982934 main.go:141] libmachine: (addons-881745)       <model type='virtio'/>
	I0729 13:12:13.109087  982934 main.go:141] libmachine: (addons-881745)     </interface>
	I0729 13:12:13.109096  982934 main.go:141] libmachine: (addons-881745)     <serial type='pty'>
	I0729 13:12:13.109140  982934 main.go:141] libmachine: (addons-881745)       <target port='0'/>
	I0729 13:12:13.109167  982934 main.go:141] libmachine: (addons-881745)     </serial>
	I0729 13:12:13.109183  982934 main.go:141] libmachine: (addons-881745)     <console type='pty'>
	I0729 13:12:13.109196  982934 main.go:141] libmachine: (addons-881745)       <target type='serial' port='0'/>
	I0729 13:12:13.109206  982934 main.go:141] libmachine: (addons-881745)     </console>
	I0729 13:12:13.109220  982934 main.go:141] libmachine: (addons-881745)     <rng model='virtio'>
	I0729 13:12:13.109231  982934 main.go:141] libmachine: (addons-881745)       <backend model='random'>/dev/random</backend>
	I0729 13:12:13.109239  982934 main.go:141] libmachine: (addons-881745)     </rng>
	I0729 13:12:13.109248  982934 main.go:141] libmachine: (addons-881745)     
	I0729 13:12:13.109258  982934 main.go:141] libmachine: (addons-881745)     
	I0729 13:12:13.109267  982934 main.go:141] libmachine: (addons-881745)   </devices>
	I0729 13:12:13.109286  982934 main.go:141] libmachine: (addons-881745) </domain>
	I0729 13:12:13.109301  982934 main.go:141] libmachine: (addons-881745) 
	I0729 13:12:13.113640  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:04:d4:65 in network default
	I0729 13:12:13.114178  982934 main.go:141] libmachine: (addons-881745) Ensuring networks are active...
	I0729 13:12:13.114191  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:13.114796  982934 main.go:141] libmachine: (addons-881745) Ensuring network default is active
	I0729 13:12:13.115057  982934 main.go:141] libmachine: (addons-881745) Ensuring network mk-addons-881745 is active
	I0729 13:12:13.115461  982934 main.go:141] libmachine: (addons-881745) Getting domain xml...
	I0729 13:12:13.116055  982934 main.go:141] libmachine: (addons-881745) Creating domain...
	I0729 13:12:13.421832  982934 main.go:141] libmachine: (addons-881745) Waiting to get IP...
	I0729 13:12:13.422554  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:13.422923  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:13.422947  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:13.422897  982956 retry.go:31] will retry after 305.464892ms: waiting for machine to come up
	I0729 13:12:13.730394  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:13.730791  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:13.730818  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:13.730759  982956 retry.go:31] will retry after 308.538344ms: waiting for machine to come up
	I0729 13:12:14.041274  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:14.041705  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:14.041737  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:14.041651  982956 retry.go:31] will retry after 391.302482ms: waiting for machine to come up
	I0729 13:12:14.434132  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:14.434536  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:14.434564  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:14.434483  982956 retry.go:31] will retry after 382.183876ms: waiting for machine to come up
	I0729 13:12:14.818073  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:14.818460  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:14.818481  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:14.818441  982956 retry.go:31] will retry after 660.554898ms: waiting for machine to come up
	I0729 13:12:15.480166  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:15.480597  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:15.480621  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:15.480551  982956 retry.go:31] will retry after 773.489083ms: waiting for machine to come up
	I0729 13:12:16.255591  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:16.256055  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:16.256081  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:16.256000  982956 retry.go:31] will retry after 721.534344ms: waiting for machine to come up
	I0729 13:12:16.979414  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:16.979768  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:16.979802  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:16.979713  982956 retry.go:31] will retry after 1.407916984s: waiting for machine to come up
	I0729 13:12:18.389344  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:18.389777  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:18.389813  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:18.389722  982956 retry.go:31] will retry after 1.620156831s: waiting for machine to come up
	I0729 13:12:20.012437  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:20.012796  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:20.012823  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:20.012743  982956 retry.go:31] will retry after 2.309026893s: waiting for machine to come up
	I0729 13:12:22.323813  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:22.324243  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:22.324300  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:22.324213  982956 retry.go:31] will retry after 1.883250908s: waiting for machine to come up
	I0729 13:12:24.210258  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:24.210712  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:24.210740  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:24.210654  982956 retry.go:31] will retry after 3.187634723s: waiting for machine to come up
	I0729 13:12:27.399311  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:27.399726  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:27.399752  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:27.399655  982956 retry.go:31] will retry after 3.287373845s: waiting for machine to come up
	I0729 13:12:30.689681  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:30.690020  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:30.690048  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:30.689968  982956 retry.go:31] will retry after 5.570376363s: waiting for machine to come up
	I0729 13:12:36.265556  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.266015  982934 main.go:141] libmachine: (addons-881745) Found IP for machine: 192.168.39.103
	I0729 13:12:36.266039  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has current primary IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.266045  982934 main.go:141] libmachine: (addons-881745) Reserving static IP address...
	I0729 13:12:36.266413  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find host DHCP lease matching {name: "addons-881745", mac: "52:54:00:c8:39:fc", ip: "192.168.39.103"} in network mk-addons-881745
	I0729 13:12:36.415984  982934 main.go:141] libmachine: (addons-881745) DBG | Getting to WaitForSSH function...
	I0729 13:12:36.416024  982934 main.go:141] libmachine: (addons-881745) Reserved static IP address: 192.168.39.103
	I0729 13:12:36.416037  982934 main.go:141] libmachine: (addons-881745) Waiting for SSH to be available...
	I0729 13:12:36.418821  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.419246  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.419283  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.419487  982934 main.go:141] libmachine: (addons-881745) DBG | Using SSH client type: external
	I0729 13:12:36.419511  982934 main.go:141] libmachine: (addons-881745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa (-rw-------)
	I0729 13:12:36.419540  982934 main.go:141] libmachine: (addons-881745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:12:36.419555  982934 main.go:141] libmachine: (addons-881745) DBG | About to run SSH command:
	I0729 13:12:36.419587  982934 main.go:141] libmachine: (addons-881745) DBG | exit 0
	I0729 13:12:36.544357  982934 main.go:141] libmachine: (addons-881745) DBG | SSH cmd err, output: <nil>: 
	I0729 13:12:36.544663  982934 main.go:141] libmachine: (addons-881745) KVM machine creation complete!
	I0729 13:12:36.545034  982934 main.go:141] libmachine: (addons-881745) Calling .GetConfigRaw
	I0729 13:12:36.561927  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:36.562173  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:36.562371  982934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:12:36.562387  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:12:36.563879  982934 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:12:36.563894  982934 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:12:36.563900  982934 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:12:36.563905  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:36.566356  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.566717  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.566745  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.566836  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:36.566999  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.567159  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.567267  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:36.567438  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:36.567655  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:36.567665  982934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:12:36.675543  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
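The WaitForSSH step above is just a retry loop: dial the guest over SSH and run "exit 0" until the command returns status 0. A minimal, hypothetical Go sketch of that probe follows; the address, user, key path and timeout are copied from the log for illustration only, and this is not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// waitForSSH keeps running "exit 0" on the guest until it succeeds or the
	// deadline passes, which is essentially what the WaitForSSH step does.
	func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
			Timeout:         10 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					runErr := sess.Run("exit 0") // exit status 0 means the guest is reachable
					sess.Close()
					client.Close()
					if runErr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s not available after %s", addr, timeout)
	}

	func main() {
		err := waitForSSH("192.168.39.103:22", "docker",
			"/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa",
			2*time.Minute)
		fmt.Println("ssh ready:", err == nil)
	}
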
	I0729 13:12:36.675567  982934 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:12:36.675575  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:36.678303  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.678644  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.678669  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.678793  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:36.679000  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.679202  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.679384  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:36.679560  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:36.679774  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:36.679786  982934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:12:36.789093  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:12:36.789153  982934 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:12:36.789160  982934 main.go:141] libmachine: Provisioning with buildroot...
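Provisioner detection above boils down to reading the ID field from /etc/os-release on the guest and matching it against known images; ID=buildroot selects the Buildroot provisioning path. A small sketch of that check, assuming the standard os-release key=value format (this is not minikube's real code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// detectProvisioner returns the ID field from an os-release file, which is
	// how the "Detecting the provisioner" step recognises the Buildroot guest.
	func detectProvisioner(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no ID field in %s", path)
	}

	func main() {
		id, err := detectProvisioner("/etc/os-release")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("provisioner:", id) // prints "buildroot" on the guest shown in this log
	}
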
	I0729 13:12:36.789168  982934 main.go:141] libmachine: (addons-881745) Calling .GetMachineName
	I0729 13:12:36.789410  982934 buildroot.go:166] provisioning hostname "addons-881745"
	I0729 13:12:36.789436  982934 main.go:141] libmachine: (addons-881745) Calling .GetMachineName
	I0729 13:12:36.789676  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:36.792173  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.792523  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.792551  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.792740  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:36.792922  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.793102  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.793225  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:36.793389  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:36.793581  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:36.793594  982934 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-881745 && echo "addons-881745" | sudo tee /etc/hostname
	I0729 13:12:36.914458  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-881745
	
	I0729 13:12:36.914490  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:36.917317  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.917671  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.917698  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.917901  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:36.918099  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.918287  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.918396  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:36.918537  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:36.918750  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:36.918768  982934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-881745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-881745/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-881745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:12:37.032814  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
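The shell snippet above keeps /etc/hosts consistent with the new hostname without duplicating entries: if a line already ends in the hostname it does nothing, otherwise it rewrites an existing 127.0.1.1 entry or appends one. A rough Go sketch of the same idempotent logic (path and hostname are the values from this run, not a general API):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell logic above: leave the file alone when
	// the hostname is already mapped, otherwise rewrite 127.0.1.1 or append it.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)
		hostRe := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
		if hostRe.MatchString(content) {
			return nil // hostname already present, nothing to do
		}
		loopbackRe := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopbackRe.MatchString(content) {
			content = loopbackRe.ReplaceAllString(content, "127.0.1.1 "+hostname)
		} else {
			if content != "" && !strings.HasSuffix(content, "\n") {
				content += "\n"
			}
			content += "127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "addons-881745"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
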
	I0729 13:12:37.032857  982934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:12:37.032877  982934 buildroot.go:174] setting up certificates
	I0729 13:12:37.032890  982934 provision.go:84] configureAuth start
	I0729 13:12:37.032899  982934 main.go:141] libmachine: (addons-881745) Calling .GetMachineName
	I0729 13:12:37.033292  982934 main.go:141] libmachine: (addons-881745) Calling .GetIP
	I0729 13:12:37.035722  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.036065  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.036101  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.036206  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.038254  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.038498  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.038518  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.038643  982934 provision.go:143] copyHostCerts
	I0729 13:12:37.038704  982934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:12:37.038850  982934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:12:37.038917  982934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:12:37.038970  982934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.addons-881745 san=[127.0.0.1 192.168.39.103 addons-881745 localhost minikube]
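The server certificate generated here is signed by the minikube CA and has to carry every name and address the apiserver may be reached by, which is why the SAN list covers 127.0.0.1, the guest IP and the hostnames shown. A hedged sketch of issuing such a certificate with Go's crypto/x509, where the key size, validity period and organization are illustrative choices rather than minikube's:

	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate signed by caCert/caKey whose
	// SANs cover the IPs and DNS names from the log line above (sketch only).
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-881745"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.103")},
			DNSNames:     []string{"addons-881745", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}
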
	I0729 13:12:37.239500  982934 provision.go:177] copyRemoteCerts
	I0729 13:12:37.239598  982934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:12:37.239649  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.242198  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.242537  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.242577  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.242715  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:37.242944  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.243107  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:37.243255  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:12:37.328164  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:12:37.352836  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 13:12:37.375487  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:12:37.398432  982934 provision.go:87] duration metric: took 365.524166ms to configureAuth
	I0729 13:12:37.398468  982934 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:12:37.398654  982934 config.go:182] Loaded profile config "addons-881745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:12:37.398750  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.401383  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.401715  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.401744  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.401922  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:37.402126  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.402292  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.402421  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:37.402567  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:37.402749  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:37.402775  982934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:12:37.895943  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:12:37.895973  982934 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:12:37.895981  982934 main.go:141] libmachine: (addons-881745) Calling .GetURL
	I0729 13:12:37.897390  982934 main.go:141] libmachine: (addons-881745) DBG | Using libvirt version 6000000
	I0729 13:12:37.899335  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.899629  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.899657  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.899793  982934 main.go:141] libmachine: Docker is up and running!
	I0729 13:12:37.899807  982934 main.go:141] libmachine: Reticulating splines...
	I0729 13:12:37.899816  982934 client.go:171] duration metric: took 25.937296783s to LocalClient.Create
	I0729 13:12:37.899851  982934 start.go:167] duration metric: took 25.937370991s to libmachine.API.Create "addons-881745"
	I0729 13:12:37.899865  982934 start.go:293] postStartSetup for "addons-881745" (driver="kvm2")
	I0729 13:12:37.899881  982934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:12:37.899904  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:37.900136  982934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:12:37.900161  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.902176  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.902494  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.902523  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.902641  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:37.902833  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.902979  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:37.903098  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:12:37.987037  982934 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:12:37.991158  982934 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:12:37.991190  982934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:12:37.991277  982934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:12:37.991303  982934 start.go:296] duration metric: took 91.428199ms for postStartSetup
	I0729 13:12:37.991350  982934 main.go:141] libmachine: (addons-881745) Calling .GetConfigRaw
	I0729 13:12:37.991957  982934 main.go:141] libmachine: (addons-881745) Calling .GetIP
	I0729 13:12:37.994604  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.994919  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.994941  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.995219  982934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/config.json ...
	I0729 13:12:37.995418  982934 start.go:128] duration metric: took 26.049880989s to createHost
	I0729 13:12:37.995443  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.997536  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.997888  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.997917  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.998044  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:37.998221  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.998372  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.998507  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:37.998657  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:37.998818  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:37.998830  982934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:12:38.104876  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722258758.085146130
	
	I0729 13:12:38.104902  982934 fix.go:216] guest clock: 1722258758.085146130
	I0729 13:12:38.104909  982934 fix.go:229] Guest: 2024-07-29 13:12:38.08514613 +0000 UTC Remote: 2024-07-29 13:12:37.995430948 +0000 UTC m=+26.148363706 (delta=89.715182ms)
	I0729 13:12:38.104951  982934 fix.go:200] guest clock delta is within tolerance: 89.715182ms
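The clock check compares the guest's "date +%s.%N" output against the host time captured just before the command ran; here the guest is 89.715182ms ahead, inside the default tolerance, so no adjustment is made. A small sketch of that comparison using the exact values from this log (it assumes %N prints nine digits, as it does on the Buildroot guest):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses "date +%s.%N" output and returns how far the guest
	// clock is ahead of (positive) or behind (negative) the host reference.
	func clockDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// assumes a fixed nine-digit nanosecond field from %N
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return 0, err
			}
		}
		guest := time.Unix(sec, nsec)
		return guest.Sub(hostRef), nil
	}

	func main() {
		host := time.Date(2024, 7, 29, 13, 12, 37, 995430948, time.UTC)
		d, _ := clockDelta("1722258758.085146130", host)
		fmt.Println(d) // ~89.715182ms, within tolerance, so the guest clock is left alone
	}
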
	I0729 13:12:38.104957  982934 start.go:83] releasing machines lock for "addons-881745", held for 26.159490127s
	I0729 13:12:38.104980  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:38.105250  982934 main.go:141] libmachine: (addons-881745) Calling .GetIP
	I0729 13:12:38.107573  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.107881  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:38.107913  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.108100  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:38.108623  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:38.108825  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:38.108932  982934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:12:38.108990  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:38.109046  982934 ssh_runner.go:195] Run: cat /version.json
	I0729 13:12:38.109070  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:38.111554  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.111677  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.111910  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:38.111934  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.112085  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:38.112092  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:38.112117  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.112225  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:38.112311  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:38.112379  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:38.112466  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:38.112528  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:38.112592  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:12:38.112631  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:12:38.189630  982934 ssh_runner.go:195] Run: systemctl --version
	I0729 13:12:38.214003  982934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:12:38.377207  982934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:12:38.383106  982934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:12:38.383172  982934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:12:38.399960  982934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:12:38.399992  982934 start.go:495] detecting cgroup driver to use...
	I0729 13:12:38.400068  982934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:12:38.417578  982934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:12:38.430667  982934 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:12:38.430735  982934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:12:38.443225  982934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:12:38.455891  982934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:12:38.566179  982934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:12:38.719619  982934 docker.go:233] disabling docker service ...
	I0729 13:12:38.719688  982934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:12:38.734511  982934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:12:38.747992  982934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:12:38.886768  982934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:12:39.004857  982934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:12:39.018417  982934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:12:39.036056  982934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:12:39.036121  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.046634  982934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:12:39.046697  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.056495  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.066282  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.076070  982934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:12:39.086092  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.095697  982934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.111898  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
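Taken together, and assuming the stock layout of the 02-crio.conf drop-in, the sed edits above leave the relevant keys looking roughly like this (section headers omitted; values are exactly those passed to sed in the commands above):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
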
	I0729 13:12:39.121485  982934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:12:39.130702  982934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:12:39.130756  982934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:12:39.143530  982934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:12:39.152960  982934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:12:39.270597  982934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:12:39.402164  982934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:12:39.402286  982934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:12:39.406943  982934 start.go:563] Will wait 60s for crictl version
	I0729 13:12:39.407017  982934 ssh_runner.go:195] Run: which crictl
	I0729 13:12:39.410615  982934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:12:39.449246  982934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:12:39.449369  982934 ssh_runner.go:195] Run: crio --version
	I0729 13:12:39.480692  982934 ssh_runner.go:195] Run: crio --version
	I0729 13:12:39.513261  982934 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:12:39.514529  982934 main.go:141] libmachine: (addons-881745) Calling .GetIP
	I0729 13:12:39.517235  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:39.517567  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:39.517595  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:39.517832  982934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:12:39.521890  982934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:12:39.533874  982934 kubeadm.go:883] updating cluster {Name:addons-881745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:12:39.533992  982934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:12:39.534047  982934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:12:39.569672  982934 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:12:39.569740  982934 ssh_runner.go:195] Run: which lz4
	I0729 13:12:39.573850  982934 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:12:39.578406  982934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:12:39.578449  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:12:40.896125  982934 crio.go:462] duration metric: took 1.322306951s to copy over tarball
	I0729 13:12:40.896207  982934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:12:43.067072  982934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.170831369s)
	I0729 13:12:43.067107  982934 crio.go:469] duration metric: took 2.170947633s to extract the tarball
	I0729 13:12:43.067116  982934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:12:43.106903  982934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:12:43.153274  982934 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:12:43.153302  982934 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:12:43.153314  982934 kubeadm.go:934] updating node { 192.168.39.103 8443 v1.30.3 crio true true} ...
	I0729 13:12:43.153439  982934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-881745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:12:43.153524  982934 ssh_runner.go:195] Run: crio config
	I0729 13:12:43.207924  982934 cni.go:84] Creating CNI manager for ""
	I0729 13:12:43.207951  982934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:12:43.207964  982934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:12:43.207989  982934 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-881745 NodeName:addons-881745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:12:43.208174  982934 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-881745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:12:43.208253  982934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:12:43.218326  982934 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:12:43.218397  982934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:12:43.227846  982934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 13:12:43.244031  982934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:12:43.261201  982934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 13:12:43.278636  982934 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I0729 13:12:43.282381  982934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:12:43.294256  982934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:12:43.433143  982934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:12:43.449659  982934 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745 for IP: 192.168.39.103
	I0729 13:12:43.449681  982934 certs.go:194] generating shared ca certs ...
	I0729 13:12:43.449711  982934 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:43.449861  982934 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:12:43.707812  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt ...
	I0729 13:12:43.707847  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt: {Name:mk1f3b9879e632d5d36f972daaa00444d9485e92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:43.708029  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key ...
	I0729 13:12:43.708045  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key: {Name:mkc81de37ebd536a1b656d14ad97b60baeff9d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
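The minikubeCA pair written above is an ordinary self-signed CA; every later certificate in this log (apiserver, proxy client, kubelet client) chains back to it. A hedged, self-contained sketch of producing an equivalent CA with crypto/x509, where the key size, validity and PEM destinations are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA roughly equivalent to the "minikubeCA" generated above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// A CA is simply a certificate signed by its own key (template == parent).
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}
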
	I0729 13:12:43.708159  982934 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:12:44.004872  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt ...
	I0729 13:12:44.004906  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt: {Name:mkf25694e23f33f75a76ea593b701a738621b1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.005098  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key ...
	I0729 13:12:44.005115  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key: {Name:mkf146f5ce823ad2153a06ce2b555e53cb297941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.005216  982934 certs.go:256] generating profile certs ...
	I0729 13:12:44.005294  982934 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.key
	I0729 13:12:44.005312  982934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt with IP's: []
	I0729 13:12:44.082140  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt ...
	I0729 13:12:44.082174  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: {Name:mka4cc3b622a5d75fae1730c6118a6f1db1caa3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.082322  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.key ...
	I0729 13:12:44.082333  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.key: {Name:mkb686f7e93a16ce26e027e16c3e1502a18673e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.082401  982934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key.0c3e969b
	I0729 13:12:44.082419  982934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt.0c3e969b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103]
	I0729 13:12:44.372075  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt.0c3e969b ...
	I0729 13:12:44.372104  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt.0c3e969b: {Name:mk34bd4b76752e3494785ed6b9787f052f0b605d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.372256  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key.0c3e969b ...
	I0729 13:12:44.372270  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key.0c3e969b: {Name:mk2e60c9c8fd7053dfd086bf083e7145cca45668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.372339  982934 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt.0c3e969b -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt
	I0729 13:12:44.372442  982934 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key.0c3e969b -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key
	I0729 13:12:44.372486  982934 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.key
	I0729 13:12:44.372504  982934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.crt with IP's: []
	I0729 13:12:44.448296  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.crt ...
	I0729 13:12:44.448326  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.crt: {Name:mk8304498a5a21b32735469d642d814ee1ca1798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.448491  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.key ...
	I0729 13:12:44.448504  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.key: {Name:mk7f60229bf97d46fb4c2d73ae61c662f3f4ee5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.448673  982934 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:12:44.448708  982934 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:12:44.448731  982934 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:12:44.448755  982934 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:12:44.449430  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:12:44.478702  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:12:44.501678  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:12:44.524992  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:12:44.547538  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 13:12:44.570360  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:12:44.592721  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:12:44.614899  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:12:44.638123  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:12:44.660642  982934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:12:44.675935  982934 ssh_runner.go:195] Run: openssl version
	I0729 13:12:44.681508  982934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:12:44.691356  982934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:12:44.695599  982934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:12:44.695655  982934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:12:44.701355  982934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:12:44.711855  982934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:12:44.715695  982934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:12:44.715749  982934 kubeadm.go:392] StartCluster: {Name:addons-881745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:12:44.715840  982934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:12:44.715897  982934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:12:44.750470  982934 cri.go:89] found id: ""
	I0729 13:12:44.750556  982934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:12:44.760709  982934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:12:44.769941  982934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:12:44.779519  982934 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:12:44.779537  982934 kubeadm.go:157] found existing configuration files:
	
	I0729 13:12:44.779576  982934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:12:44.788536  982934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:12:44.788590  982934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:12:44.797726  982934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:12:44.806209  982934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:12:44.806259  982934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:12:44.815032  982934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:12:44.823490  982934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:12:44.823538  982934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:12:44.832451  982934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:12:44.841179  982934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:12:44.841226  982934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:12:44.849802  982934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:12:44.905770  982934 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:12:44.905908  982934 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:12:45.041407  982934 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:12:45.041567  982934 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:12:45.041713  982934 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 13:12:45.270727  982934 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:12:45.396728  982934 out.go:204]   - Generating certificates and keys ...
	I0729 13:12:45.396871  982934 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:12:45.396971  982934 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:12:45.397066  982934 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 13:12:45.493817  982934 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 13:12:45.712513  982934 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 13:12:45.795441  982934 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 13:12:46.020295  982934 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 13:12:46.020465  982934 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-881745 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I0729 13:12:46.315872  982934 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 13:12:46.316021  982934 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-881745 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I0729 13:12:46.389329  982934 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 13:12:46.463885  982934 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 13:12:46.579876  982934 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 13:12:46.579981  982934 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:12:46.694566  982934 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:12:46.930327  982934 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:12:47.032624  982934 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:12:47.118168  982934 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:12:47.403080  982934 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:12:47.403630  982934 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:12:47.405981  982934 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:12:47.407483  982934 out.go:204]   - Booting up control plane ...
	I0729 13:12:47.407562  982934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:12:47.407652  982934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:12:47.408154  982934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:12:47.427983  982934 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:12:47.430812  982934 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:12:47.430868  982934 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:12:47.561774  982934 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:12:47.561889  982934 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:12:48.062427  982934 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.964177ms
	I0729 13:12:48.062512  982934 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:12:52.564225  982934 kubeadm.go:310] [api-check] The API server is healthy after 4.501999443s
	I0729 13:12:52.574752  982934 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:12:52.595522  982934 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:12:52.627486  982934 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:12:52.627705  982934 kubeadm.go:310] [mark-control-plane] Marking the node addons-881745 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:12:52.641402  982934 kubeadm.go:310] [bootstrap-token] Using token: ot71em.9uxc3f9qsdcuq9f5
	I0729 13:12:52.642822  982934 out.go:204]   - Configuring RBAC rules ...
	I0729 13:12:52.642944  982934 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:12:52.646755  982934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:12:52.652949  982934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:12:52.657939  982934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0729 13:12:52.663016  982934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:12:52.665859  982934 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:12:52.968601  982934 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:12:53.400501  982934 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:12:53.970229  982934 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:12:53.970751  982934 kubeadm.go:310] 
	I0729 13:12:53.970823  982934 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:12:53.970840  982934 kubeadm.go:310] 
	I0729 13:12:53.970926  982934 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:12:53.970935  982934 kubeadm.go:310] 
	I0729 13:12:53.971001  982934 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:12:53.971456  982934 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:12:53.971519  982934 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:12:53.971527  982934 kubeadm.go:310] 
	I0729 13:12:53.971595  982934 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:12:53.971604  982934 kubeadm.go:310] 
	I0729 13:12:53.971642  982934 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:12:53.971649  982934 kubeadm.go:310] 
	I0729 13:12:53.971690  982934 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:12:53.971760  982934 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:12:53.971825  982934 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:12:53.971831  982934 kubeadm.go:310] 
	I0729 13:12:53.971910  982934 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:12:53.971978  982934 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:12:53.971984  982934 kubeadm.go:310] 
	I0729 13:12:53.972119  982934 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ot71em.9uxc3f9qsdcuq9f5 \
	I0729 13:12:53.972271  982934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 13:12:53.972304  982934 kubeadm.go:310] 	--control-plane 
	I0729 13:12:53.972314  982934 kubeadm.go:310] 
	I0729 13:12:53.972439  982934 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:12:53.972450  982934 kubeadm.go:310] 
	I0729 13:12:53.972561  982934 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ot71em.9uxc3f9qsdcuq9f5 \
	I0729 13:12:53.972697  982934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 13:12:53.973316  982934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:12:53.973471  982934 cni.go:84] Creating CNI manager for ""
	I0729 13:12:53.973489  982934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:12:53.975038  982934 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:12:53.976167  982934 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:12:53.987551  982934 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 13:12:54.006767  982934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:12:54.006877  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-881745 minikube.k8s.io/updated_at=2024_07_29T13_12_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=addons-881745 minikube.k8s.io/primary=true
	I0729 13:12:54.006877  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:54.040528  982934 ops.go:34] apiserver oom_adj: -16
	I0729 13:12:54.106539  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:54.607265  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:55.106823  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:55.607114  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:56.106927  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:56.606624  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:57.107513  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:57.607394  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:58.106969  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:58.606846  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:59.107519  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:59.606601  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:00.106730  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:00.607490  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:01.107013  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:01.606588  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:02.107516  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:02.607583  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:03.107283  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:03.606925  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:04.106767  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:04.606846  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:05.106655  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:05.606952  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:06.107272  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:06.606704  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:07.107284  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:07.202825  982934 kubeadm.go:1113] duration metric: took 13.196015361s to wait for elevateKubeSystemPrivileges
	I0729 13:13:07.202872  982934 kubeadm.go:394] duration metric: took 22.487128484s to StartCluster
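	The block of repeated "kubectl get sa default" runs above is a wait loop: minikube polls roughly every 500ms until the default service account exists, which is what the 13.196s elevateKubeSystemPrivileges metric measures. The following is a standalone sketch of that polling pattern, assuming the kubectl and kubeconfig paths shown in the log; it is illustrative, not the actual elevateKubeSystemPrivileges implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Paths taken from the log lines above.
		kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
		kubeconfig := "/var/lib/minikube/kubeconfig"

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Succeeds once the "default" service account has been created by the controller manager.
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}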
	I0729 13:13:07.202902  982934 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:13:07.203068  982934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:13:07.203547  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:13:07.203757  982934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 13:13:07.203792  982934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:13:07.203881  982934 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 13:13:07.204005  982934 addons.go:69] Setting yakd=true in profile "addons-881745"
	I0729 13:13:07.204040  982934 config.go:182] Loaded profile config "addons-881745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:13:07.204051  982934 addons.go:234] Setting addon yakd=true in "addons-881745"
	I0729 13:13:07.204046  982934 addons.go:69] Setting inspektor-gadget=true in profile "addons-881745"
	I0729 13:13:07.204058  982934 addons.go:69] Setting gcp-auth=true in profile "addons-881745"
	I0729 13:13:07.204089  982934 addons.go:234] Setting addon inspektor-gadget=true in "addons-881745"
	I0729 13:13:07.204091  982934 addons.go:69] Setting ingress=true in profile "addons-881745"
	I0729 13:13:07.204085  982934 addons.go:69] Setting storage-provisioner=true in profile "addons-881745"
	I0729 13:13:07.204099  982934 mustload.go:65] Loading cluster: addons-881745
	I0729 13:13:07.204111  982934 addons.go:69] Setting volcano=true in profile "addons-881745"
	I0729 13:13:07.204120  982934 addons.go:234] Setting addon storage-provisioner=true in "addons-881745"
	I0729 13:13:07.204118  982934 addons.go:69] Setting helm-tiller=true in profile "addons-881745"
	I0729 13:13:07.204130  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204133  982934 addons.go:234] Setting addon volcano=true in "addons-881745"
	I0729 13:13:07.204140  982934 addons.go:234] Setting addon helm-tiller=true in "addons-881745"
	I0729 13:13:07.204148  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204152  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204168  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204238  982934 addons.go:69] Setting metrics-server=true in profile "addons-881745"
	I0729 13:13:07.204270  982934 addons.go:234] Setting addon metrics-server=true in "addons-881745"
	I0729 13:13:07.204290  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204321  982934 config.go:182] Loaded profile config "addons-881745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:13:07.204629  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.204662  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.204678  982934 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-881745"
	I0729 13:13:07.204706  982934 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-881745"
	I0729 13:13:07.204720  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.204735  982934 addons.go:69] Setting registry=true in profile "addons-881745"
	I0729 13:13:07.204739  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.204751  982934 addons.go:234] Setting addon registry=true in "addons-881745"
	I0729 13:13:07.204774  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.204780  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204794  982934 addons.go:69] Setting default-storageclass=true in profile "addons-881745"
	I0729 13:13:07.204809  982934 addons.go:69] Setting ingress-dns=true in profile "addons-881745"
	I0729 13:13:07.204825  982934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-881745"
	I0729 13:13:07.204797  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.204840  982934 addons.go:234] Setting addon ingress-dns=true in "addons-881745"
	I0729 13:13:07.204726  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204872  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204112  982934 addons.go:234] Setting addon ingress=true in "addons-881745"
	I0729 13:13:07.204094  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204101  982934 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-881745"
	I0729 13:13:07.205110  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205128  982934 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-881745"
	I0729 13:13:07.204678  982934 addons.go:69] Setting volumesnapshots=true in profile "addons-881745"
	I0729 13:13:07.205141  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205152  982934 addons.go:234] Setting addon volumesnapshots=true in "addons-881745"
	I0729 13:13:07.204661  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205179  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205182  982934 addons.go:69] Setting cloud-spanner=true in profile "addons-881745"
	I0729 13:13:07.205203  982934 addons.go:234] Setting addon cloud-spanner=true in "addons-881745"
	I0729 13:13:07.205211  982934 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-881745"
	I0729 13:13:07.205249  982934 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-881745"
	I0729 13:13:07.205271  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204661  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205304  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.205342  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205292  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205409  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205433  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205444  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205455  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205474  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205476  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205497  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205610  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205617  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205633  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205643  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205768  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.205890  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205909  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.205916  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.206018  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.206053  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.206102  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.206128  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.209063  982934 out.go:177] * Verifying Kubernetes components...
	I0729 13:13:07.211091  982934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:13:07.225432  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0729 13:13:07.225528  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40797
	I0729 13:13:07.225590  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0729 13:13:07.225634  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0729 13:13:07.228797  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.228853  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.240402  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.240461  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.240515  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.241083  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.241104  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.241254  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.241265  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.241383  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.241394  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.242232  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.242279  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.242287  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.243078  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.243081  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.243125  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.243155  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.243369  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.244981  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.245535  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.245556  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.245983  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.246043  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.246425  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.246457  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.247168  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.247193  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.253089  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I0729 13:13:07.253639  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.254220  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.254238  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.254621  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.255210  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.255235  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.266769  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0729 13:13:07.267329  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0729 13:13:07.267852  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.268486  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.268504  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.270328  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.270764  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.270869  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43167
	I0729 13:13:07.271211  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.271689  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.271710  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.271843  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.271862  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.272221  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.272401  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I0729 13:13:07.272968  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.273015  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.273054  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.273270  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.273321  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.273961  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.273986  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.274533  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.275070  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.275097  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.275291  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0729 13:13:07.275843  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.276445  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.276468  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.276821  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.276989  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.278292  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.278488  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.279167  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.282511  982934 addons.go:234] Setting addon default-storageclass=true in "addons-881745"
	I0729 13:13:07.282560  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.282943  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.282986  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.283217  982934 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 13:13:07.284835  982934 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 13:13:07.284854  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 13:13:07.284873  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.285818  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0729 13:13:07.286336  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0729 13:13:07.286780  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.287263  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.287283  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.287652  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.287690  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36265
	I0729 13:13:07.287926  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.288220  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.288245  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.288774  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.288801  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.288862  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.288895  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.289492  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.289510  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.289575  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.289589  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.289615  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.289784  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.289837  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.289944  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.289946  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.290028  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.291031  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.291071  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.291695  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
	I0729 13:13:07.291835  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0729 13:13:07.292179  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.292674  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.292693  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.292768  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.293160  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.293371  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.294350  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.294374  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.294979  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.295033  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.295232  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.295734  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0729 13:13:07.296246  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.296510  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.297298  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.297345  982934 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 13:13:07.298143  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.298160  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.298371  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
	I0729 13:13:07.298544  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.298880  982934 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 13:13:07.298901  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 13:13:07.298919  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.298949  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.299188  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.301083  982934 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 13:13:07.301512  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.302163  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.302180  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.302487  982934 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 13:13:07.302507  982934 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 13:13:07.302527  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.302540  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0729 13:13:07.302600  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.302630  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.302682  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43325
	I0729 13:13:07.303051  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.303071  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.303093  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.303317  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.303526  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.303691  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.303705  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.303813  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.304385  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.304404  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.304458  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.304672  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.305033  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.305690  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.305733  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.307221  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.307251  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.307272  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.307299  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.307361  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I0729 13:13:07.307517  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.307696  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.307705  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.308076  982934 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-881745"
	I0729 13:13:07.308128  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.308540  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.308587  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.308905  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.308922  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.308983  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.309248  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.309425  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.309564  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.310231  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I0729 13:13:07.310633  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.311135  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.311152  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.311505  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.312102  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.312139  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.312837  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.313334  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.313492  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.313533  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.315281  982934 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 13:13:07.316437  982934 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:13:07.316466  982934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:13:07.316487  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.321035  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0729 13:13:07.321901  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.323498  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.324030  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.324061  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.324448  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.324472  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.324545  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.324748  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.324928  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.325101  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.325467  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.325856  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.326844  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0729 13:13:07.327259  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.327746  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.327769  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.328046  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.328295  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.329091  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.330098  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.331016  982934 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 13:13:07.331130  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0729 13:13:07.331664  982934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 13:13:07.331771  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.332736  982934 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 13:13:07.332766  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 13:13:07.332792  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.332736  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.332852  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.333186  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.334190  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.334340  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.334641  982934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 13:13:07.335768  982934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 13:13:07.336229  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.336265  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0729 13:13:07.336913  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.336944  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.337116  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.337223  982934 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 13:13:07.337251  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 13:13:07.337273  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.337312  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.337487  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.337731  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.338699  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0729 13:13:07.340879  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.341048  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.341081  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.341259  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.341469  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.341631  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.341803  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.341817  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0729 13:13:07.342351  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.342759  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.342792  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.343109  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.343300  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.344626  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.346322  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 13:13:07.347855  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 13:13:07.349165  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 13:13:07.350448  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 13:13:07.350835  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0729 13:13:07.351034  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0729 13:13:07.351279  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.351440  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.352042  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.352063  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.352162  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.352178  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.352489  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.352520  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.352697  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.352724  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.353058  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 13:13:07.354476  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 13:13:07.355606  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 13:13:07.355806  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0729 13:13:07.355813  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.355881  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.356380  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.356920  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.356955  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.357371  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.357607  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 13:13:07.357608  982934 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 13:13:07.357670  982934 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0729 13:13:07.357985  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.358026  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.358890  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 13:13:07.358911  982934 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 13:13:07.358931  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.358984  982934 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 13:13:07.359000  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 13:13:07.359019  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.359085  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 13:13:07.359093  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 13:13:07.359107  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.361426  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.361476  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.362068  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.362137  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.363029  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.363048  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.363266  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.363428  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.363677  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.363683  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.363708  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.363784  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.363941  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.364077  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.364284  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.364869  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.365284  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.366662  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0729 13:13:07.367102  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.367117  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.368155  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.368656  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.368662  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0729 13:13:07.368996  982934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:13:07.369002  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.369355  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:07.369368  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:07.369505  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.369524  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.369573  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.369583  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.369608  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.369616  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.369643  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.369821  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.369896  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.369937  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:07.369945  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:07.369953  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:07.369959  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:07.370144  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:07.370170  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:07.370178  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 13:13:07.370256  982934 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 13:13:07.370450  982934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:13:07.370464  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:13:07.370481  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.370551  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.370595  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.370849  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.370866  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.370931  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.370965  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.371550  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.371682  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.371949  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.372107  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.373819  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.373863  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.374085  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.374140  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.374261  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.374278  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.374411  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.374588  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.374751  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.374935  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.375806  982934 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 13:13:07.375859  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.377394  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 13:13:07.379019  982934 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 13:13:07.379092  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 13:13:07.379109  982934 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 13:13:07.379125  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.380589  982934 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 13:13:07.380612  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 13:13:07.380628  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.382066  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.382413  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.382438  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.382595  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.382798  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.382992  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.383153  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.384296  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.384800  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.384824  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.385026  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.385228  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.385398  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.385563  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	W0729 13:13:07.386060  982934 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48332->192.168.39.103:22: read: connection reset by peer
	I0729 13:13:07.386091  982934 retry.go:31] will retry after 133.910263ms: ssh: handshake failed: read tcp 192.168.39.1:48332->192.168.39.103:22: read: connection reset by peer
	I0729 13:13:07.386985  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I0729 13:13:07.387416  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.387821  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.387845  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.388195  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.388384  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.388536  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I0729 13:13:07.389117  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.389612  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.389631  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.389697  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.389953  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.390176  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.391464  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.391482  982934 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 13:13:07.391743  982934 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:13:07.391761  982934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:13:07.391779  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.393746  982934 out.go:177]   - Using image docker.io/busybox:stable
	I0729 13:13:07.394330  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.394786  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.394808  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.394980  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.395137  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.395199  982934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 13:13:07.395216  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 13:13:07.395237  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.395263  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.395756  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.397995  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.398343  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.398366  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.398516  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.398699  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.398857  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.398999  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	W0729 13:13:07.521004  982934 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48364->192.168.39.103:22: read: connection reset by peer
	I0729 13:13:07.521035  982934 retry.go:31] will retry after 466.463671ms: ssh: handshake failed: read tcp 192.168.39.1:48364->192.168.39.103:22: read: connection reset by peer
	I0729 13:13:07.664821  982934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:13:07.664838  982934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 13:13:07.745648  982934 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 13:13:07.745678  982934 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 13:13:07.792889  982934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:13:07.792917  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 13:13:07.830723  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 13:13:07.833402  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 13:13:07.846476  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 13:13:07.865509  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:13:07.875346  982934 node_ready.go:35] waiting up to 6m0s for node "addons-881745" to be "Ready" ...
	I0729 13:13:07.878373  982934 node_ready.go:49] node "addons-881745" has status "Ready":"True"
	I0729 13:13:07.878396  982934 node_ready.go:38] duration metric: took 3.018136ms for node "addons-881745" to be "Ready" ...
	I0729 13:13:07.878407  982934 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:13:07.885206  982934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bdkkm" in "kube-system" namespace to be "Ready" ...
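	(A rough kubectl equivalent of the readiness checks logged above, as a sketch only: the test drives these waits through the Go client, and the node name and label selector below are taken from the log lines themselves. It assumes kubectl's current context points at the addons-881745 cluster.)

	    # Node readiness: the "Ready" condition that the node_ready check polls
	    kubectl get node addons-881745 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # Pod readiness: one wait per system-critical selector, e.g. CoreDNS
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m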
	I0729 13:13:07.924362  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 13:13:07.926015  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 13:13:07.930387  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:13:07.936884  982934 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 13:13:07.936900  982934 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 13:13:07.950284  982934 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 13:13:07.950305  982934 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 13:13:07.992465  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 13:13:07.992502  982934 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 13:13:08.018598  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 13:13:08.018630  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 13:13:08.027723  982934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:13:08.027748  982934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:13:08.045911  982934 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 13:13:08.045938  982934 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 13:13:08.101638  982934 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 13:13:08.101671  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 13:13:08.149228  982934 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 13:13:08.149258  982934 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 13:13:08.171543  982934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:13:08.171567  982934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:13:08.180882  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 13:13:08.180908  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 13:13:08.228089  982934 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 13:13:08.228119  982934 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 13:13:08.240975  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 13:13:08.241002  982934 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 13:13:08.352491  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 13:13:08.352518  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 13:13:08.354120  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:13:08.365987  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 13:13:08.406936  982934 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 13:13:08.406971  982934 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 13:13:08.407537  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 13:13:08.497281  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 13:13:08.497313  982934 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 13:13:08.500368  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 13:13:08.500386  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 13:13:08.591103  982934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 13:13:08.591141  982934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 13:13:08.659214  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 13:13:08.659242  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 13:13:08.676940  982934 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 13:13:08.676968  982934 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 13:13:08.728110  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 13:13:08.728134  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 13:13:08.887605  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 13:13:08.887628  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 13:13:08.900489  982934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 13:13:08.900535  982934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 13:13:08.938882  982934 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 13:13:08.938914  982934 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 13:13:09.109801  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 13:13:09.137287  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 13:13:09.137314  982934 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 13:13:09.170943  982934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 13:13:09.170970  982934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 13:13:09.177342  982934 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 13:13:09.177364  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 13:13:09.457622  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 13:13:09.457649  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 13:13:09.487008  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 13:13:09.487040  982934 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 13:13:09.504900  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 13:13:09.676868  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 13:13:09.676906  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 13:13:09.837675  982934 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 13:13:09.837708  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 13:13:09.898201  982934 pod_ready.go:102] pod "coredns-7db6d8ff4d-bdkkm" in "kube-system" namespace has status "Ready":"False"
	I0729 13:13:09.994214  982934 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.329274493s)
	I0729 13:13:09.994247  982934 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
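	(Reconstructed from the sed pipeline completed above, the stanza spliced into the CoreDNS Corefile is the following; the rest of the Corefile is left untouched by that command.)

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

	    # A quick way to confirm the record afterwards, assuming kubectl is pointed at this cluster:
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'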
	I0729 13:13:10.135793  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 13:13:10.135823  982934 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 13:13:10.173833  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 13:13:10.432856  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 13:13:10.498456  982934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-881745" context rescaled to 1 replicas
	I0729 13:13:10.846703  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.015935057s)
	I0729 13:13:10.846771  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:10.846785  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:10.847227  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:10.847235  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:10.847252  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:10.847281  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:10.847298  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:10.847574  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:10.847596  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:12.000846  982934 pod_ready.go:92] pod "coredns-7db6d8ff4d-bdkkm" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.000883  982934 pod_ready.go:81] duration metric: took 4.115651178s for pod "coredns-7db6d8ff4d-bdkkm" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.000898  982934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nnxtv" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.066890  982934 pod_ready.go:92] pod "coredns-7db6d8ff4d-nnxtv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.066918  982934 pod_ready.go:81] duration metric: took 66.012864ms for pod "coredns-7db6d8ff4d-nnxtv" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.066929  982934 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.139792  982934 pod_ready.go:92] pod "etcd-addons-881745" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.139815  982934 pod_ready.go:81] duration metric: took 72.880094ms for pod "etcd-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.139826  982934 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.197077  982934 pod_ready.go:92] pod "kube-apiserver-addons-881745" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.197100  982934 pod_ready.go:81] duration metric: took 57.268132ms for pod "kube-apiserver-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.197111  982934 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.265734  982934 pod_ready.go:92] pod "kube-controller-manager-addons-881745" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.265764  982934 pod_ready.go:81] duration metric: took 68.644666ms for pod "kube-controller-manager-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.265780  982934 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6h84v" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.358540  982934 pod_ready.go:92] pod "kube-proxy-6h84v" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.358565  982934 pod_ready.go:81] duration metric: took 92.77809ms for pod "kube-proxy-6h84v" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.358574  982934 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.754065  982934 pod_ready.go:92] pod "kube-scheduler-addons-881745" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.754103  982934 pod_ready.go:81] duration metric: took 395.520053ms for pod "kube-scheduler-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.754117  982934 pod_ready.go:38] duration metric: took 4.875693531s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:13:12.754139  982934 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:13:12.754217  982934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:13:14.314978  982934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 13:13:14.315022  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:14.318366  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:14.318765  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:14.318786  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:14.319034  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:14.319278  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:14.319442  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:14.319609  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:14.663843  982934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 13:13:14.867487  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.034040042s)
	I0729 13:13:14.867547  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867548  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.02103527s)
	I0729 13:13:14.867560  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867578  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867591  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867635  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.002089746s)
	I0729 13:13:14.867682  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.943289426s)
	I0729 13:13:14.867707  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867730  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867737  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.941683967s)
	I0729 13:13:14.867683  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867794  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867771  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867835  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867883  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.867890  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.867898  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867905  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868124  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868140  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.868136  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.513986484s)
	I0729 13:13:14.868151  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868161  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868170  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868184  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868277  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.502260874s)
	I0729 13:13:14.868280  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868284  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868295  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868322  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868322  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868329  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868335  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.868338  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.460777083s)
	I0729 13:13:14.868343  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868350  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868354  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868364  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868405  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868428  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.868436  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868442  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868484  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868490  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.868500  982934 addons.go:475] Verifying addon ingress=true in "addons-881745"
	I0729 13:13:14.868763  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868795  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868815  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868851  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.869046  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.869070  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.869075  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.869082  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.869089  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.869131  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.869151  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.869157  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.869164  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.869170  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.869514  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.937383876s)
	I0729 13:13:14.869543  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.869552  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.869858  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.869882  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.869889  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.869895  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.869902  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.870273  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.870298  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.870304  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.870335  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.870352  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.870358  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.870365  982934 addons.go:475] Verifying addon registry=true in "addons-881745"
	I0729 13:13:14.870752  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.760915055s)
	I0729 13:13:14.870783  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.870795  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.870911  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.365975742s)
	I0729 13:13:14.870936  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.870953  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.871026  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.871054  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.871062  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.871070  982934 addons.go:475] Verifying addon metrics-server=true in "addons-881745"
	I0729 13:13:14.871346  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.871383  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.871391  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.871399  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.871406  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.871470  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.871494  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.871501  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.871508  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.871526  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.872031  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.872062  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.872069  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.873237  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.873266  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.873272  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874216  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.874227  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.874242  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874249  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.874250  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.874265  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.874305  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.874231  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874317  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.874324  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.874414  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.874421  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874824  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.874897  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.874917  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874960  982934 out.go:177] * Verifying registry addon...
	I0729 13:13:14.875109  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.875137  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.875144  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.875352  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.875382  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.875389  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.875525  982934 out.go:177] * Verifying ingress addon...
	I0729 13:13:14.876940  982934 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-881745 service yakd-dashboard -n yakd-dashboard
	
	I0729 13:13:14.877098  982934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 13:13:14.877659  982934 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 13:13:14.959571  982934 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 13:13:14.959593  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:14.969105  982934 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 13:13:14.969136  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:14.986185  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.986209  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.986536  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.986559  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.986603  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	W0729 13:13:14.986687  982934 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0729 13:13:15.020692  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:15.020719  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:15.021124  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:15.021145  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:15.022472  982934 addons.go:234] Setting addon gcp-auth=true in "addons-881745"
	I0729 13:13:15.022533  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:15.023001  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:15.023044  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:15.037556  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0729 13:13:15.038028  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:15.038600  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:15.038631  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:15.038994  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:15.039575  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:15.039611  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:15.053954  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I0729 13:13:15.054403  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:15.054941  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:15.054972  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:15.055307  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:15.055493  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:15.056952  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:15.057194  982934 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 13:13:15.057221  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:15.059739  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:15.060179  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:15.060227  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:15.060302  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:15.060541  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:15.060702  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:15.060832  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:15.405232  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:15.406518  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:15.458661  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.284772514s)
	W0729 13:13:15.458720  982934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 13:13:15.458773  982934 retry.go:31] will retry after 285.489337ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 13:13:15.745019  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 13:13:15.897484  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:15.900064  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:16.381473  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:16.382827  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:16.894309  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:16.894529  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:17.264080  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.831151507s)
	I0729 13:13:17.264149  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:17.264160  982934 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.206940963s)
	I0729 13:13:17.264166  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:17.264086  982934 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.509839577s)
	I0729 13:13:17.264432  982934 api_server.go:72] duration metric: took 10.060599333s to wait for apiserver process to appear ...
	I0729 13:13:17.264444  982934 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:13:17.264466  982934 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0729 13:13:17.264623  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:17.264641  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:17.264652  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:17.264660  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:17.264920  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:17.265035  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:17.265055  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:17.265072  982934 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-881745"
	I0729 13:13:17.266425  982934 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 13:13:17.266434  982934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 13:13:17.268449  982934 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 13:13:17.269311  982934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 13:13:17.269772  982934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 13:13:17.269794  982934 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 13:13:17.286885  982934 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 13:13:17.286903  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:17.291000  982934 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0729 13:13:17.292944  982934 api_server.go:141] control plane version: v1.30.3
	I0729 13:13:17.292966  982934 api_server.go:131] duration metric: took 28.514891ms to wait for apiserver health ...
	I0729 13:13:17.292975  982934 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:13:17.314853  982934 system_pods.go:59] 19 kube-system pods found
	I0729 13:13:17.314888  982934 system_pods.go:61] "coredns-7db6d8ff4d-bdkkm" [563ebe0c-e2ff-41b8-b21f-cdcd975e3c60] Running
	I0729 13:13:17.314894  982934 system_pods.go:61] "coredns-7db6d8ff4d-nnxtv" [f58f7ace-0e01-40e5-896c-890372ed6c46] Running
	I0729 13:13:17.314905  982934 system_pods.go:61] "csi-hostpath-attacher-0" [649a67d2-d600-48f0-b5c2-2ea497257b21] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 13:13:17.314912  982934 system_pods.go:61] "csi-hostpath-resizer-0" [f0cc70bb-8614-4719-96e4-e3a0bc9675cc] Pending
	I0729 13:13:17.314920  982934 system_pods.go:61] "csi-hostpathplugin-g7jgm" [15890aa7-f2ca-4378-82f6-fd2f7c53e367] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 13:13:17.314928  982934 system_pods.go:61] "etcd-addons-881745" [6a19058a-445b-42ed-a2ef-1507df695a9b] Running
	I0729 13:13:17.314932  982934 system_pods.go:61] "kube-apiserver-addons-881745" [0e3895dd-a162-40f5-b3eb-4988ff45bc70] Running
	I0729 13:13:17.314935  982934 system_pods.go:61] "kube-controller-manager-addons-881745" [7f441c06-71eb-40d9-b398-c45e2db0b580] Running
	I0729 13:13:17.314940  982934 system_pods.go:61] "kube-ingress-dns-minikube" [3782b3a7-db92-4e0b-9a46-55e0b7de43be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 13:13:17.314944  982934 system_pods.go:61] "kube-proxy-6h84v" [241d7377-0c07-446a-8758-72ee113b1999] Running
	I0729 13:13:17.314947  982934 system_pods.go:61] "kube-scheduler-addons-881745" [1d98f0bf-f865-40c8-a430-0eb7fc1967a0] Running
	I0729 13:13:17.314952  982934 system_pods.go:61] "metrics-server-c59844bb4-5nbcm" [c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:13:17.314968  982934 system_pods.go:61] "nvidia-device-plugin-daemonset-2mgsg" [e736085a-8a65-4ef9-a69b-d309fa46e0b7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 13:13:17.314976  982934 system_pods.go:61] "registry-656c9c8d9c-z2p5s" [96173b90-f986-42c3-8dab-68759432df0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 13:13:17.314982  982934 system_pods.go:61] "registry-proxy-ljt48" [28156caa-8805-4e7d-a425-0e65cdbb245b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 13:13:17.315023  982934 system_pods.go:61] "snapshot-controller-745499f584-kh4vd" [5883f9c2-34cc-4011-880b-d4900758b609] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 13:13:17.315037  982934 system_pods.go:61] "snapshot-controller-745499f584-v464c" [edf1a75b-61a6-4cec-9bcf-df387ceb3aa6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 13:13:17.315041  982934 system_pods.go:61] "storage-provisioner" [b4440573-37d1-4041-a657-1e7338cd83c0] Running
	I0729 13:13:17.315046  982934 system_pods.go:61] "tiller-deploy-6677d64bcd-h58h7" [37213271-b4d7-4a89-bd83-34aacc2ec941] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 13:13:17.315051  982934 system_pods.go:74] duration metric: took 22.071584ms to wait for pod list to return data ...
	I0729 13:13:17.315067  982934 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:13:17.323857  982934 default_sa.go:45] found service account: "default"
	I0729 13:13:17.323883  982934 default_sa.go:55] duration metric: took 8.805986ms for default service account to be created ...
	I0729 13:13:17.323895  982934 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:13:17.340868  982934 system_pods.go:86] 19 kube-system pods found
	I0729 13:13:17.340895  982934 system_pods.go:89] "coredns-7db6d8ff4d-bdkkm" [563ebe0c-e2ff-41b8-b21f-cdcd975e3c60] Running
	I0729 13:13:17.340900  982934 system_pods.go:89] "coredns-7db6d8ff4d-nnxtv" [f58f7ace-0e01-40e5-896c-890372ed6c46] Running
	I0729 13:13:17.340907  982934 system_pods.go:89] "csi-hostpath-attacher-0" [649a67d2-d600-48f0-b5c2-2ea497257b21] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 13:13:17.340912  982934 system_pods.go:89] "csi-hostpath-resizer-0" [f0cc70bb-8614-4719-96e4-e3a0bc9675cc] Pending
	I0729 13:13:17.340922  982934 system_pods.go:89] "csi-hostpathplugin-g7jgm" [15890aa7-f2ca-4378-82f6-fd2f7c53e367] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 13:13:17.340927  982934 system_pods.go:89] "etcd-addons-881745" [6a19058a-445b-42ed-a2ef-1507df695a9b] Running
	I0729 13:13:17.340931  982934 system_pods.go:89] "kube-apiserver-addons-881745" [0e3895dd-a162-40f5-b3eb-4988ff45bc70] Running
	I0729 13:13:17.340935  982934 system_pods.go:89] "kube-controller-manager-addons-881745" [7f441c06-71eb-40d9-b398-c45e2db0b580] Running
	I0729 13:13:17.340970  982934 system_pods.go:89] "kube-ingress-dns-minikube" [3782b3a7-db92-4e0b-9a46-55e0b7de43be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 13:13:17.340977  982934 system_pods.go:89] "kube-proxy-6h84v" [241d7377-0c07-446a-8758-72ee113b1999] Running
	I0729 13:13:17.340982  982934 system_pods.go:89] "kube-scheduler-addons-881745" [1d98f0bf-f865-40c8-a430-0eb7fc1967a0] Running
	I0729 13:13:17.340991  982934 system_pods.go:89] "metrics-server-c59844bb4-5nbcm" [c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:13:17.340999  982934 system_pods.go:89] "nvidia-device-plugin-daemonset-2mgsg" [e736085a-8a65-4ef9-a69b-d309fa46e0b7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 13:13:17.341007  982934 system_pods.go:89] "registry-656c9c8d9c-z2p5s" [96173b90-f986-42c3-8dab-68759432df0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 13:13:17.341016  982934 system_pods.go:89] "registry-proxy-ljt48" [28156caa-8805-4e7d-a425-0e65cdbb245b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 13:13:17.341023  982934 system_pods.go:89] "snapshot-controller-745499f584-kh4vd" [5883f9c2-34cc-4011-880b-d4900758b609] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 13:13:17.341032  982934 system_pods.go:89] "snapshot-controller-745499f584-v464c" [edf1a75b-61a6-4cec-9bcf-df387ceb3aa6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 13:13:17.341038  982934 system_pods.go:89] "storage-provisioner" [b4440573-37d1-4041-a657-1e7338cd83c0] Running
	I0729 13:13:17.341044  982934 system_pods.go:89] "tiller-deploy-6677d64bcd-h58h7" [37213271-b4d7-4a89-bd83-34aacc2ec941] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 13:13:17.341052  982934 system_pods.go:126] duration metric: took 17.152648ms to wait for k8s-apps to be running ...
	I0729 13:13:17.341060  982934 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:13:17.341112  982934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:13:17.384237  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:17.384237  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:17.421922  982934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 13:13:17.421948  982934 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 13:13:17.466758  982934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 13:13:17.466783  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 13:13:17.518271  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 13:13:17.775039  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:17.890102  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:17.893184  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:18.283762  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:18.383359  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:18.383925  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:18.778455  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:18.888428  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:18.888691  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:19.307224  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:19.514046  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:19.514224  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:19.682214  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.937129618s)
	I0729 13:13:19.682295  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.163988798s)
	I0729 13:13:19.682231  982934 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.341094592s)
	I0729 13:13:19.682324  982934 system_svc.go:56] duration metric: took 2.341257823s WaitForService to wait for kubelet
	I0729 13:13:19.682330  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:19.682296  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:19.682337  982934 kubeadm.go:582] duration metric: took 12.478505048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:13:19.682363  982934 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:13:19.682370  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:19.682344  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:19.682732  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:19.682748  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:19.682757  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:19.682764  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:19.682794  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:19.682807  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:19.682817  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:19.682824  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:19.683002  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:19.683032  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:19.683052  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:19.683279  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:19.683297  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:19.683343  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:19.685392  982934 addons.go:475] Verifying addon gcp-auth=true in "addons-881745"
	I0729 13:13:19.686168  982934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:13:19.686191  982934 node_conditions.go:123] node cpu capacity is 2
	I0729 13:13:19.686220  982934 node_conditions.go:105] duration metric: took 3.851629ms to run NodePressure ...
	I0729 13:13:19.686233  982934 start.go:241] waiting for startup goroutines ...
	I0729 13:13:19.687268  982934 out.go:177] * Verifying gcp-auth addon...
	I0729 13:13:19.689094  982934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 13:13:19.691691  982934 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 13:13:19.691715  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:19.776028  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:19.883070  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:19.883150  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:20.193336  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:20.275399  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:20.381528  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:20.383169  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:20.692828  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:20.774717  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:20.882907  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:20.883346  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:21.193379  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:21.275601  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:21.382481  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:21.383526  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:21.692723  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:21.774559  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:21.883035  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:21.883356  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:22.193320  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:22.274340  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:22.381556  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:22.382023  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:22.693234  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:22.775303  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:22.882659  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:22.883039  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:23.193723  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:23.276738  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:23.382422  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:23.383346  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:23.704620  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:23.781894  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:23.884891  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:23.893294  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:24.192235  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:24.275461  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:24.383087  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:24.385480  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:24.693346  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:24.776072  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:24.882338  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:24.882783  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:25.193858  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:25.274918  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:25.381837  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:25.382014  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:25.692786  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:25.775077  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:25.881546  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:25.884162  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:26.193297  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:26.275806  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:26.383907  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:26.385119  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:26.694746  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:26.791765  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:26.886530  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:26.886603  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:27.192888  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:27.275148  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:27.383014  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:27.383912  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:27.693091  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:27.775402  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:27.882482  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:27.882754  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:28.193298  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:28.275721  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:28.382370  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:28.382715  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:28.693369  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:28.775534  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:28.883270  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:28.884568  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:29.192692  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:29.274549  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:29.383853  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:29.383870  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:29.693778  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:29.774962  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:29.881889  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:29.883706  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:30.194648  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:30.274556  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:30.383645  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:30.384043  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:30.693649  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:30.774474  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:30.882968  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:30.883017  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:31.193312  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:31.274816  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:31.383399  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:31.383767  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:31.693327  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:31.777772  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:31.889831  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:31.890169  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:32.430376  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:32.432235  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:32.432842  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:32.437523  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:32.692973  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:32.777587  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:32.881747  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:32.881915  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:33.196437  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:33.276270  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:33.382947  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:33.383273  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:33.693407  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:33.778477  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:33.883075  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:33.883246  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:34.193149  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:34.275056  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:34.381610  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:34.383721  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:34.693092  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:34.776425  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:34.882397  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:34.883899  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:35.193085  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:35.274932  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:35.382489  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:35.383556  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:35.692743  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:35.775705  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:35.882501  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:35.883777  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:36.194101  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:36.276189  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:36.383065  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:36.384728  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:36.868266  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:36.871038  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:36.886866  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:36.891747  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:37.192486  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:37.274478  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:37.382343  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:37.382723  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:37.693014  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:37.775170  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:37.882451  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:37.883211  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:38.193740  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:38.274996  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:38.381925  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:38.382129  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:38.693603  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:38.775499  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:38.881966  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:38.882190  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:39.192804  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:39.274748  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:39.382711  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:39.384368  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:39.692720  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:39.774007  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:39.886819  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:39.887299  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:40.193732  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:40.275047  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:40.381732  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:40.382055  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:40.693046  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:40.774944  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:40.881771  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:40.884187  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:41.193097  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:41.275266  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:41.396081  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:41.396232  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:41.693289  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:41.775228  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:41.881737  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:41.882535  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:42.193251  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:42.278235  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:42.382667  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:42.382891  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:42.693291  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:42.775796  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:42.885294  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:42.886049  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:43.193173  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:43.275752  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:43.384242  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:43.384277  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:43.693753  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:43.775167  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:43.882253  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:43.882752  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:44.193512  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:44.275231  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:44.382334  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:44.382936  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:44.693200  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:44.775041  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:44.883043  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:44.883474  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:45.417393  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:45.417592  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:45.419351  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:45.419352  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:45.692776  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:45.774895  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:45.881496  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:45.881798  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:46.192890  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:46.274909  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:46.381344  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:46.383624  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:46.692754  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:46.774623  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:46.884609  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:46.884714  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:47.193348  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:47.279219  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:47.382443  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:47.382849  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:47.693026  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:47.775452  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:47.882374  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:47.882785  982934 kapi.go:107] duration metric: took 33.005683413s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 13:13:48.192569  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:48.275025  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:48.382143  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:48.692949  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:48.774981  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:48.882246  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:49.193149  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:49.274735  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:49.382049  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:49.693566  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:49.776841  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:49.881970  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:50.194637  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:50.281926  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:50.382819  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:50.709807  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:50.774947  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:50.883668  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:51.193603  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:51.276033  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:51.382834  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:51.693445  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:51.775413  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:51.882181  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:52.193476  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:52.275720  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:52.382834  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:52.692713  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:52.775092  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:52.882646  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:53.192342  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:53.275396  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:53.382363  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:54.114655  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:54.117626  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:54.117763  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:54.195391  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:54.275346  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:54.383382  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:54.693627  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:54.780263  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:54.884220  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:55.194295  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:55.277659  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:55.382526  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:55.693350  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:55.779285  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:55.882317  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:56.193327  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:56.275385  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:56.382755  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:56.692529  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:56.775718  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:56.881488  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:57.193664  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:57.275247  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:57.382825  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:57.720255  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:57.774657  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:57.882202  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:58.193552  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:58.274726  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:58.381957  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:58.692678  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:58.774253  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:58.882263  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:59.193550  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:59.283675  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:59.382216  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:59.693151  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:59.775659  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:59.882075  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:00.193442  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:00.275995  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:00.382454  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:00.695123  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:00.774527  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:00.881518  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:01.193471  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:01.275601  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:01.382683  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:01.692273  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:01.775941  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:02.164538  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:02.192990  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:02.279329  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:02.383399  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:02.693143  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:02.775014  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:02.884251  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:03.193218  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:03.275728  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:03.381920  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:03.693061  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:03.775159  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:03.883212  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:04.192769  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:04.274897  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:04.381707  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:04.693067  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:04.774847  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:04.888252  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:05.193212  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:05.275371  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:05.382645  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:05.692240  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:05.776008  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:05.882068  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:06.194598  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:06.282423  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:06.386209  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:06.693448  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:06.775499  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:06.881497  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:07.199610  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:07.275108  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:07.453674  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:07.692339  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:07.777239  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:07.882681  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:08.192766  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:08.276300  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:08.382147  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:08.693540  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:08.777890  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:08.882444  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:09.193550  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:09.275538  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:09.381928  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:09.693019  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:09.775521  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:09.881737  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:10.194431  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:10.278485  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:10.382681  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:10.702207  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:10.775869  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:10.885357  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:11.193498  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:11.276145  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:11.385000  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:11.692793  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:11.774381  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:11.882056  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:12.193740  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:12.276837  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:12.387087  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:12.693070  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:12.775126  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:12.883143  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:13.201156  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:13.275680  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:13.381929  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:13.693806  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:13.777329  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:13.888690  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:14.199375  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:14.276597  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:14.391566  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:14.693841  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:14.774770  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:14.882446  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:15.193995  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:15.275271  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:15.383013  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:15.694756  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:15.774371  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:15.886564  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:16.193635  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:16.275841  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:16.382157  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:16.693077  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:16.775466  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:16.883432  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:17.192594  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:17.275217  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:17.382102  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:17.701847  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:17.776902  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:17.882190  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:18.193140  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:18.275422  982934 kapi.go:107] duration metric: took 1m1.006107116s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 13:14:18.381898  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:18.693507  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:18.881913  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:19.192606  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:19.382432  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:19.693567  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:19.882857  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:20.192887  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:20.382969  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:20.692504  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:20.882778  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:21.194002  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:21.382056  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:21.693117  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:21.882874  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:22.192521  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:22.382879  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:22.694036  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:22.883382  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:23.495876  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:23.496146  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:23.693393  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:23.882894  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:24.193110  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:24.382191  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:24.692722  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:24.882719  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:25.192490  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:25.382512  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:25.693198  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:26.125344  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:26.193682  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:26.384047  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:26.693393  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:26.884223  982934 kapi.go:107] duration metric: took 1m12.006560087s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 13:14:27.192375  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:27.693270  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:28.192543  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:28.692834  982934 kapi.go:107] duration metric: took 1m9.003734279s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 13:14:28.694584  982934 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-881745 cluster.
	I0729 13:14:28.696034  982934 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 13:14:28.697267  982934 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 13:14:28.698594  982934 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, metrics-server, inspektor-gadget, nvidia-device-plugin, ingress-dns, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 13:14:28.700296  982934 addons.go:510] duration metric: took 1m21.496411734s for enable addons: enabled=[cloud-spanner storage-provisioner metrics-server inspektor-gadget nvidia-device-plugin ingress-dns helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 13:14:28.700348  982934 start.go:246] waiting for cluster config update ...
	I0729 13:14:28.700375  982934 start.go:255] writing updated cluster config ...
	I0729 13:14:28.700691  982934 ssh_runner.go:195] Run: rm -f paused
	I0729 13:14:28.755794  982934 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:14:28.757654  982934 out.go:177] * Done! kubectl is now configured to use "addons-881745" cluster and "default" namespace by default
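	The repeated kapi.go:96 entries above come from a poll-until-Running loop over pods selected by label, and the kapi.go:107 entries report how long each wait took. The following Go sketch illustrates that pattern under stated assumptions; the kubeconfig location, the kube-system namespace, the 6-minute deadline, and the 500 ms poll interval are illustrative choices, and the selector is the one seen in the log. This is not minikube's actual kapi.go implementation.

	// waitaddon: a minimal sketch of polling pods by label selector until one is Running,
	// then reporting the elapsed wait as a duration metric, mirroring the log lines above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; minikube resolves its own profile config instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatalf("load kubeconfig: %v", err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("build client: %v", err)
		}

		const (
			namespace = "kube-system"                            // assumed namespace
			selector  = "kubernetes.io/minikube-addons=registry" // label selector from the log
			timeout   = 6 * time.Minute                          // assumed overall deadline
		)

		start := time.Now()
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()

		ticker := time.NewTicker(500 * time.Millisecond) // assumed poll interval, roughly the cadence seen above
		defer ticker.Stop()

		for {
			pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						// Corresponds to the kapi.go:107 "duration metric" line.
						fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
						return
					}
				}
				for _, p := range pods.Items {
					// Corresponds to the kapi.go:96 "waiting for pod" lines.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}

			select {
			case <-ctx.Done():
				log.Fatalf("timed out after %s waiting for %s", timeout, selector)
			case <-ticker.C:
			}
		}
	}

	Swapping the selector for app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, or kubernetes.io/minikube-addons=gcp-auth reproduces the other wait loops recorded above.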
	
	
	==> CRI-O <==
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.332577361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259078332551088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c41539a-d378-4d35-9f72-fc5b4ba50402 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.333225703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91ed2e16-b9e7-4d51-bb54-17f787978616 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.333299792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91ed2e16-b9e7-4d51-bb54-17f787978616 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.333608081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03956f3a1a8ac3750c40bbced60c54d9439a663565cb2985230d0c6349ac71e6,PodSandboxId:1e63b5c05279bc65911efc46cb36a4f0870c1de46d1ef3e55684feb2f714e9c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722259069614795418,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-52v9x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 472c31e0-c2e1-412b-97f0-1556647833f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6574499,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4170db4eaf6862d30bd6cf8dd7df7fab13d695f14f3469054a1a3416d0cb1090,PodSandboxId:6ff338d74a1d65f6c48602e0b890605cb8f7c6e4af93550c2b4dd43e90c387f1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722258929875618849,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 403be47e-4fc8-4b4a-92f4-57da8aa66907,},Annotations:map[string]string{io.kubernete
s.container.hash: a9ca56f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd582e26ab76a87df81618e94fd8911acbfa53120eb2ffa43337935feef74266,PodSandboxId:1d9d83f0ae40b93bf932d0aedaf05b8058d90f958605e2698d19722186362211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722258872239203544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687ccff9-8f64-4657-a5
49-dd263a4d8b14,},Annotations:map[string]string{io.kubernetes.container.hash: faf43a33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437,PodSandboxId:d221ea69d4276dea6a4b1cd2b5ed17b45667f1314ad3f625584be89e17f3ff7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722258829637465966,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5nbcm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f,},Annotations:map[string]string{io.kubernetes.container.hash: d49ff5d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7,PodSandboxId:b381e1ba8f57616cdf6b71685f775baf346c16b6b016fa6e99d6fb8281155773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258794805286033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4440573-37d1-4041-a657-1e7338cd83c0,},Annotations:map[string]string{io.kubernetes.container.hash: 26a0b021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02,PodSandboxId:56c539f8ea97ceec92269fadc899bd84ed43572eeea2fa4f7d088e9054959f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258790345619597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-bdkkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563ebe0c-e2ff-41b8-b21f-cdcd975e3c60,},Annotations:map[string]string{io.kubernetes.container.hash: ff128fc9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37,PodSandboxId:7d2a402f353cd23f877482699c9477c1cbb1adc625741bd158f94dff2fc13b1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258788174948470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6h84v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241d7377-0c07-446a-8758-72ee113b1999,},Annotations:map[string]string{io.kubernetes.container.hash: 6ccdd6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8,PodSandboxId:b374475df602822d212ec9948162991d11520a52c57f58326e656c710bbc090a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75
a899,State:CONTAINER_RUNNING,CreatedAt:1722258768602478955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df7eb1b39c9f83f6aef26e5aa39bf30,},Annotations:map[string]string{io.kubernetes.container.hash: 232cf87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423,PodSandboxId:71a81d84e6960da9a4615d5f8142d1971ef808083ba1196848b433145a1dd8de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedA
t:1722258768579427222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea9dd953528f9555b7b1165a4eb4160,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e,PodSandboxId:7c94e609457546c62f1cc54fd2c04af39c9a2fbc72d3ac7434868f037ed26816,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172225876856384
1342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad020bf239eb10f745c57431ca19780,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0,PodSandboxId:d8531751d3b1f1efe60e3c9cc0c84dd258cbe682fdc1aee6db7367efa5999249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258768538556224,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09433823db087c4b78e1f4444c870f1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91ed2e16-b9e7-4d51-bb54-17f787978616 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.370799403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=833e173c-6532-494d-82e1-2b3aa6974f26 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.370867779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=833e173c-6532-494d-82e1-2b3aa6974f26 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.371656609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35f983c3-e353-4884-a265-8bec28310541 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.372853200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259078372829304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35f983c3-e353-4884-a265-8bec28310541 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.373369729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d2a1c03-581b-4e11-9620-4dfee55b547b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.373438318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d2a1c03-581b-4e11-9620-4dfee55b547b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.373674253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03956f3a1a8ac3750c40bbced60c54d9439a663565cb2985230d0c6349ac71e6,PodSandboxId:1e63b5c05279bc65911efc46cb36a4f0870c1de46d1ef3e55684feb2f714e9c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722259069614795418,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-52v9x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 472c31e0-c2e1-412b-97f0-1556647833f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6574499,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4170db4eaf6862d30bd6cf8dd7df7fab13d695f14f3469054a1a3416d0cb1090,PodSandboxId:6ff338d74a1d65f6c48602e0b890605cb8f7c6e4af93550c2b4dd43e90c387f1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722258929875618849,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 403be47e-4fc8-4b4a-92f4-57da8aa66907,},Annotations:map[string]string{io.kubernete
s.container.hash: a9ca56f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd582e26ab76a87df81618e94fd8911acbfa53120eb2ffa43337935feef74266,PodSandboxId:1d9d83f0ae40b93bf932d0aedaf05b8058d90f958605e2698d19722186362211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722258872239203544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687ccff9-8f64-4657-a5
49-dd263a4d8b14,},Annotations:map[string]string{io.kubernetes.container.hash: faf43a33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437,PodSandboxId:d221ea69d4276dea6a4b1cd2b5ed17b45667f1314ad3f625584be89e17f3ff7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722258829637465966,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5nbcm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f,},Annotations:map[string]string{io.kubernetes.container.hash: d49ff5d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7,PodSandboxId:b381e1ba8f57616cdf6b71685f775baf346c16b6b016fa6e99d6fb8281155773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258794805286033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4440573-37d1-4041-a657-1e7338cd83c0,},Annotations:map[string]string{io.kubernetes.container.hash: 26a0b021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02,PodSandboxId:56c539f8ea97ceec92269fadc899bd84ed43572eeea2fa4f7d088e9054959f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258790345619597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-bdkkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563ebe0c-e2ff-41b8-b21f-cdcd975e3c60,},Annotations:map[string]string{io.kubernetes.container.hash: ff128fc9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37,PodSandboxId:7d2a402f353cd23f877482699c9477c1cbb1adc625741bd158f94dff2fc13b1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258788174948470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6h84v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241d7377-0c07-446a-8758-72ee113b1999,},Annotations:map[string]string{io.kubernetes.container.hash: 6ccdd6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8,PodSandboxId:b374475df602822d212ec9948162991d11520a52c57f58326e656c710bbc090a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75
a899,State:CONTAINER_RUNNING,CreatedAt:1722258768602478955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df7eb1b39c9f83f6aef26e5aa39bf30,},Annotations:map[string]string{io.kubernetes.container.hash: 232cf87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423,PodSandboxId:71a81d84e6960da9a4615d5f8142d1971ef808083ba1196848b433145a1dd8de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedA
t:1722258768579427222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea9dd953528f9555b7b1165a4eb4160,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e,PodSandboxId:7c94e609457546c62f1cc54fd2c04af39c9a2fbc72d3ac7434868f037ed26816,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172225876856384
1342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad020bf239eb10f745c57431ca19780,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0,PodSandboxId:d8531751d3b1f1efe60e3c9cc0c84dd258cbe682fdc1aee6db7367efa5999249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258768538556224,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09433823db087c4b78e1f4444c870f1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d2a1c03-581b-4e11-9620-4dfee55b547b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.414021365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d89b9c03-e650-4404-ba43-ce3526d7dd27 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.414120553Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d89b9c03-e650-4404-ba43-ce3526d7dd27 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.415708363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db4f6333-9f54-41b0-852b-035b95acb260 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.417066749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259078417040232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db4f6333-9f54-41b0-852b-035b95acb260 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.417869971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=029ec82c-91c4-43aa-be12-48f5d41c3b09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.417944628Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=029ec82c-91c4-43aa-be12-48f5d41c3b09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.418475931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03956f3a1a8ac3750c40bbced60c54d9439a663565cb2985230d0c6349ac71e6,PodSandboxId:1e63b5c05279bc65911efc46cb36a4f0870c1de46d1ef3e55684feb2f714e9c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722259069614795418,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-52v9x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 472c31e0-c2e1-412b-97f0-1556647833f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6574499,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4170db4eaf6862d30bd6cf8dd7df7fab13d695f14f3469054a1a3416d0cb1090,PodSandboxId:6ff338d74a1d65f6c48602e0b890605cb8f7c6e4af93550c2b4dd43e90c387f1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722258929875618849,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 403be47e-4fc8-4b4a-92f4-57da8aa66907,},Annotations:map[string]string{io.kubernete
s.container.hash: a9ca56f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd582e26ab76a87df81618e94fd8911acbfa53120eb2ffa43337935feef74266,PodSandboxId:1d9d83f0ae40b93bf932d0aedaf05b8058d90f958605e2698d19722186362211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722258872239203544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687ccff9-8f64-4657-a5
49-dd263a4d8b14,},Annotations:map[string]string{io.kubernetes.container.hash: faf43a33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437,PodSandboxId:d221ea69d4276dea6a4b1cd2b5ed17b45667f1314ad3f625584be89e17f3ff7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722258829637465966,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5nbcm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f,},Annotations:map[string]string{io.kubernetes.container.hash: d49ff5d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7,PodSandboxId:b381e1ba8f57616cdf6b71685f775baf346c16b6b016fa6e99d6fb8281155773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258794805286033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4440573-37d1-4041-a657-1e7338cd83c0,},Annotations:map[string]string{io.kubernetes.container.hash: 26a0b021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02,PodSandboxId:56c539f8ea97ceec92269fadc899bd84ed43572eeea2fa4f7d088e9054959f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258790345619597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-bdkkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563ebe0c-e2ff-41b8-b21f-cdcd975e3c60,},Annotations:map[string]string{io.kubernetes.container.hash: ff128fc9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37,PodSandboxId:7d2a402f353cd23f877482699c9477c1cbb1adc625741bd158f94dff2fc13b1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258788174948470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6h84v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241d7377-0c07-446a-8758-72ee113b1999,},Annotations:map[string]string{io.kubernetes.container.hash: 6ccdd6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8,PodSandboxId:b374475df602822d212ec9948162991d11520a52c57f58326e656c710bbc090a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75
a899,State:CONTAINER_RUNNING,CreatedAt:1722258768602478955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df7eb1b39c9f83f6aef26e5aa39bf30,},Annotations:map[string]string{io.kubernetes.container.hash: 232cf87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423,PodSandboxId:71a81d84e6960da9a4615d5f8142d1971ef808083ba1196848b433145a1dd8de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedA
t:1722258768579427222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea9dd953528f9555b7b1165a4eb4160,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e,PodSandboxId:7c94e609457546c62f1cc54fd2c04af39c9a2fbc72d3ac7434868f037ed26816,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172225876856384
1342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad020bf239eb10f745c57431ca19780,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0,PodSandboxId:d8531751d3b1f1efe60e3c9cc0c84dd258cbe682fdc1aee6db7367efa5999249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258768538556224,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09433823db087c4b78e1f4444c870f1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=029ec82c-91c4-43aa-be12-48f5d41c3b09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.451496393Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70450ef5-c14e-4dc5-a02a-68e49cd99627 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.451582632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70450ef5-c14e-4dc5-a02a-68e49cd99627 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.453024338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dedc778d-6358-4397-8b3d-6c2aa609eb67 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.454187955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259078454164206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dedc778d-6358-4397-8b3d-6c2aa609eb67 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.455077705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13a42f35-8b08-4708-b860-f155e12bb8db name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.455135055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13a42f35-8b08-4708-b860-f155e12bb8db name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:17:58 addons-881745 crio[678]: time="2024-07-29 13:17:58.455444052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03956f3a1a8ac3750c40bbced60c54d9439a663565cb2985230d0c6349ac71e6,PodSandboxId:1e63b5c05279bc65911efc46cb36a4f0870c1de46d1ef3e55684feb2f714e9c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722259069614795418,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-52v9x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 472c31e0-c2e1-412b-97f0-1556647833f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6574499,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4170db4eaf6862d30bd6cf8dd7df7fab13d695f14f3469054a1a3416d0cb1090,PodSandboxId:6ff338d74a1d65f6c48602e0b890605cb8f7c6e4af93550c2b4dd43e90c387f1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722258929875618849,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 403be47e-4fc8-4b4a-92f4-57da8aa66907,},Annotations:map[string]string{io.kubernete
s.container.hash: a9ca56f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd582e26ab76a87df81618e94fd8911acbfa53120eb2ffa43337935feef74266,PodSandboxId:1d9d83f0ae40b93bf932d0aedaf05b8058d90f958605e2698d19722186362211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722258872239203544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687ccff9-8f64-4657-a5
49-dd263a4d8b14,},Annotations:map[string]string{io.kubernetes.container.hash: faf43a33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437,PodSandboxId:d221ea69d4276dea6a4b1cd2b5ed17b45667f1314ad3f625584be89e17f3ff7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722258829637465966,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5nbcm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f,},Annotations:map[string]string{io.kubernetes.container.hash: d49ff5d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7,PodSandboxId:b381e1ba8f57616cdf6b71685f775baf346c16b6b016fa6e99d6fb8281155773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258794805286033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4440573-37d1-4041-a657-1e7338cd83c0,},Annotations:map[string]string{io.kubernetes.container.hash: 26a0b021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02,PodSandboxId:56c539f8ea97ceec92269fadc899bd84ed43572eeea2fa4f7d088e9054959f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258790345619597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-bdkkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563ebe0c-e2ff-41b8-b21f-cdcd975e3c60,},Annotations:map[string]string{io.kubernetes.container.hash: ff128fc9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37,PodSandboxId:7d2a402f353cd23f877482699c9477c1cbb1adc625741bd158f94dff2fc13b1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258788174948470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6h84v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241d7377-0c07-446a-8758-72ee113b1999,},Annotations:map[string]string{io.kubernetes.container.hash: 6ccdd6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8,PodSandboxId:b374475df602822d212ec9948162991d11520a52c57f58326e656c710bbc090a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75
a899,State:CONTAINER_RUNNING,CreatedAt:1722258768602478955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df7eb1b39c9f83f6aef26e5aa39bf30,},Annotations:map[string]string{io.kubernetes.container.hash: 232cf87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423,PodSandboxId:71a81d84e6960da9a4615d5f8142d1971ef808083ba1196848b433145a1dd8de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedA
t:1722258768579427222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea9dd953528f9555b7b1165a4eb4160,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e,PodSandboxId:7c94e609457546c62f1cc54fd2c04af39c9a2fbc72d3ac7434868f037ed26816,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172225876856384
1342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad020bf239eb10f745c57431ca19780,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0,PodSandboxId:d8531751d3b1f1efe60e3c9cc0c84dd258cbe682fdc1aee6db7367efa5999249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258768538556224,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09433823db087c4b78e1f4444c870f1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13a42f35-8b08-4708-b860-f155e12bb8db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	03956f3a1a8ac       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   8 seconds ago       Running             hello-world-app           0                   1e63b5c05279b       hello-world-app-6778b5fc9f-52v9x
	4170db4eaf686       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         2 minutes ago       Running             nginx                     0                   6ff338d74a1d6       nginx
	dd582e26ab76a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   1d9d83f0ae40b       busybox
	47676082808be       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   4 minutes ago       Running             metrics-server            0                   d221ea69d4276       metrics-server-c59844bb4-5nbcm
	5a9c7c2c07dfe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        4 minutes ago       Running             storage-provisioner       0                   b381e1ba8f576       storage-provisioner
	a2fdb92d3b939       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        4 minutes ago       Running             coredns                   0                   56c539f8ea97c       coredns-7db6d8ff4d-bdkkm
	45d6021d310a9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        4 minutes ago       Running             kube-proxy                0                   7d2a402f353cd       kube-proxy-6h84v
	f925986f1ac40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        5 minutes ago       Running             etcd                      0                   b374475df6028       etcd-addons-881745
	8b62cddfa98d3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        5 minutes ago       Running             kube-scheduler            0                   71a81d84e6960       kube-scheduler-addons-881745
	ecaab0389284a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        5 minutes ago       Running             kube-apiserver            0                   7c94e60945754       kube-apiserver-addons-881745
	8b6ec32f0b284       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        5 minutes ago       Running             kube-controller-manager   0                   d8531751d3b1f       kube-controller-manager-addons-881745
	
	
	==> coredns [a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38998 - 21548 "HINFO IN 7096452714176680965.4973821805623616437. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011702965s
	[INFO] 10.244.0.22:36835 - 26671 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000519466s
	[INFO] 10.244.0.22:33177 - 62168 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000403977s
	[INFO] 10.244.0.22:48829 - 46584 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114311s
	[INFO] 10.244.0.22:55456 - 18332 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000062959s
	[INFO] 10.244.0.22:48083 - 37568 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082s
	[INFO] 10.244.0.22:39012 - 24425 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000051648s
	[INFO] 10.244.0.22:34952 - 52667 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000405899s
	[INFO] 10.244.0.22:49399 - 47521 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000900806s
	[INFO] 10.244.0.27:59072 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000475802s
	[INFO] 10.244.0.27:53032 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175396s
	
	
	==> describe nodes <==
	Name:               addons-881745
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-881745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=addons-881745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_12_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-881745
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:12:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-881745
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:17:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:16:27 +0000   Mon, 29 Jul 2024 13:12:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:16:27 +0000   Mon, 29 Jul 2024 13:12:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:16:27 +0000   Mon, 29 Jul 2024 13:12:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:16:27 +0000   Mon, 29 Jul 2024 13:12:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    addons-881745
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a5e6478103449d8ba6fc5473dbf0772
	  System UUID:                0a5e6478-1034-49d8-ba6f-c5473dbf0772
	  Boot ID:                    b717f393-ac51-46d7-8b5d-796bba867582
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  default                     hello-world-app-6778b5fc9f-52v9x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-7db6d8ff4d-bdkkm                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m51s
	  kube-system                 etcd-addons-881745                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m6s
	  kube-system                 kube-apiserver-addons-881745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-881745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-6h84v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-addons-881745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 metrics-server-c59844bb4-5nbcm           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m46s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m49s  kube-proxy       
	  Normal  Starting                 5m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m5s   kubelet          Node addons-881745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s   kubelet          Node addons-881745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s   kubelet          Node addons-881745 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m4s   kubelet          Node addons-881745 status is now: NodeReady
	  Normal  RegisteredNode           4m52s  node-controller  Node addons-881745 event: Registered Node addons-881745 in Controller
	
	
	==> dmesg <==
	[  +5.006184] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.291589] kauditd_printk_skb: 102 callbacks suppressed
	[  +9.171223] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.374865] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.488589] kauditd_printk_skb: 28 callbacks suppressed
	[Jul29 13:14] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.191742] kauditd_printk_skb: 65 callbacks suppressed
	[  +6.730585] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.099425] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.021612] kauditd_printk_skb: 29 callbacks suppressed
	[ +10.797226] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.871161] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.182947] kauditd_printk_skb: 25 callbacks suppressed
	[Jul29 13:15] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.935504] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.330593] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.475228] kauditd_printk_skb: 6 callbacks suppressed
	[ +15.357543] kauditd_printk_skb: 13 callbacks suppressed
	[ +11.350239] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.838927] kauditd_printk_skb: 33 callbacks suppressed
	[Jul29 13:16] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.772190] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.942873] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.054606] kauditd_printk_skb: 16 callbacks suppressed
	[Jul29 13:17] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8] <==
	{"level":"info","ts":"2024-07-29T13:13:57.69663Z","caller":"traceutil/trace.go:171","msg":"trace[1029002797] transaction","detail":"{read_only:false; response_revision:957; number_of_response:1; }","duration":"205.056974ms","start":"2024-07-29T13:13:57.491559Z","end":"2024-07-29T13:13:57.696616Z","steps":["trace[1029002797] 'process raft request'  (duration: 204.957065ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:02.150723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.436551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-07-29T13:14:02.150809Z","caller":"traceutil/trace.go:171","msg":"trace[1373360230] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:973; }","duration":"280.545067ms","start":"2024-07-29T13:14:01.87024Z","end":"2024-07-29T13:14:02.150785Z","steps":["trace[1373360230] 'range keys from in-memory index tree'  (duration: 280.204223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:02.151045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.938543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T13:14:02.15107Z","caller":"traceutil/trace.go:171","msg":"trace[1781667686] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:973; }","duration":"331.984624ms","start":"2024-07-29T13:14:01.819076Z","end":"2024-07-29T13:14:02.15106Z","steps":["trace[1781667686] 'range keys from in-memory index tree'  (duration: 331.902867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:02.151086Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:14:01.819062Z","time spent":"332.02052ms","remote":"127.0.0.1:35970","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-29T13:14:17.683248Z","caller":"traceutil/trace.go:171","msg":"trace[1343931408] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"219.092595ms","start":"2024-07-29T13:14:17.464087Z","end":"2024-07-29T13:14:17.683179Z","steps":["trace[1343931408] 'process raft request'  (duration: 218.713619ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:14:23.480985Z","caller":"traceutil/trace.go:171","msg":"trace[1021386678] linearizableReadLoop","detail":"{readStateIndex:1155; appliedIndex:1154; }","duration":"300.379644ms","start":"2024-07-29T13:14:23.180593Z","end":"2024-07-29T13:14:23.480973Z","steps":["trace[1021386678] 'read index received'  (duration: 300.202036ms)","trace[1021386678] 'applied index is now lower than readState.Index'  (duration: 177.194µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:14:23.481196Z","caller":"traceutil/trace.go:171","msg":"trace[2130197719] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"359.298556ms","start":"2024-07-29T13:14:23.121887Z","end":"2024-07-29T13:14:23.481186Z","steps":["trace[2130197719] 'process raft request'  (duration: 358.951144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:23.481282Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:14:23.121869Z","time spent":"359.352242ms","remote":"127.0.0.1:36216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-bbeot3uwmws2c5asxatwkevkx4\" mod_revision:1070 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-bbeot3uwmws2c5asxatwkevkx4\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-bbeot3uwmws2c5asxatwkevkx4\" > >"}
	{"level":"warn","ts":"2024-07-29T13:14:23.481578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.971484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-29T13:14:23.481626Z","caller":"traceutil/trace.go:171","msg":"trace[1949183600] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1120; }","duration":"301.049246ms","start":"2024-07-29T13:14:23.18057Z","end":"2024-07-29T13:14:23.481619Z","steps":["trace[1949183600] 'agreement among raft nodes before linearized reading'  (duration: 300.92207ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:23.481649Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:14:23.180558Z","time spent":"301.085724ms","remote":"127.0.0.1:36160","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-29T13:14:23.482179Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.432944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-29T13:14:23.482206Z","caller":"traceutil/trace.go:171","msg":"trace[669980208] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1120; }","duration":"113.503204ms","start":"2024-07-29T13:14:23.368695Z","end":"2024-07-29T13:14:23.482199Z","steps":["trace[669980208] 'agreement among raft nodes before linearized reading'  (duration: 113.408179ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:14:26.108411Z","caller":"traceutil/trace.go:171","msg":"trace[1408556459] linearizableReadLoop","detail":"{readStateIndex:1161; appliedIndex:1160; }","duration":"239.843161ms","start":"2024-07-29T13:14:25.868552Z","end":"2024-07-29T13:14:26.108395Z","steps":["trace[1408556459] 'read index received'  (duration: 239.558095ms)","trace[1408556459] 'applied index is now lower than readState.Index'  (duration: 284.482µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:14:26.108646Z","caller":"traceutil/trace.go:171","msg":"trace[707371727] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"270.733193ms","start":"2024-07-29T13:14:25.837897Z","end":"2024-07-29T13:14:26.10863Z","steps":["trace[707371727] 'process raft request'  (duration: 270.339316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:26.109629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.071853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-29T13:14:26.1106Z","caller":"traceutil/trace.go:171","msg":"trace[959854237] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1126; }","duration":"242.057142ms","start":"2024-07-29T13:14:25.868532Z","end":"2024-07-29T13:14:26.110589Z","steps":["trace[959854237] 'agreement among raft nodes before linearized reading'  (duration: 240.046814ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:15:16.623674Z","caller":"traceutil/trace.go:171","msg":"trace[1473155785] linearizableReadLoop","detail":"{readStateIndex:1507; appliedIndex:1506; }","duration":"264.895847ms","start":"2024-07-29T13:15:16.358742Z","end":"2024-07-29T13:15:16.623638Z","steps":["trace[1473155785] 'read index received'  (duration: 264.75084ms)","trace[1473155785] 'applied index is now lower than readState.Index'  (duration: 144.478µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T13:15:16.623949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.131855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-29T13:15:16.623977Z","caller":"traceutil/trace.go:171","msg":"trace[2087636275] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1457; }","duration":"265.250517ms","start":"2024-07-29T13:15:16.358717Z","end":"2024-07-29T13:15:16.623968Z","steps":["trace[2087636275] 'agreement among raft nodes before linearized reading'  (duration: 265.048614ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:15:16.62419Z","caller":"traceutil/trace.go:171","msg":"trace[2017358641] transaction","detail":"{read_only:false; response_revision:1457; number_of_response:1; }","duration":"370.893701ms","start":"2024-07-29T13:15:16.253225Z","end":"2024-07-29T13:15:16.624119Z","steps":["trace[2017358641] 'process raft request'  (duration: 370.312955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:15:16.624419Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:15:16.253207Z","time spent":"371.022584ms","remote":"127.0.0.1:36216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-881745\" mod_revision:1362 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-881745\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-881745\" > >"}
	{"level":"info","ts":"2024-07-29T13:16:33.312023Z","caller":"traceutil/trace.go:171","msg":"trace[854009986] transaction","detail":"{read_only:false; response_revision:1923; number_of_response:1; }","duration":"216.698706ms","start":"2024-07-29T13:16:33.095298Z","end":"2024-07-29T13:16:33.311997Z","steps":["trace[854009986] 'process raft request'  (duration: 216.610463ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:17:58 up 5 min,  0 users,  load average: 1.25, 1.13, 0.58
	Linux addons-881745 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e] <==
	E0729 13:14:53.612845       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.73.95:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.73.95:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.73.95:443: connect: connection refused
	I0729 13:14:53.682582       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0729 13:15:21.749260       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0729 13:15:22.639670       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0729 13:15:22.811221       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 13:15:23.272706       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 13:15:27.266602       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 13:15:27.445143       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.253.94"}
	I0729 13:15:57.220713       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.220899       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.254596       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.254662       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.276134       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.276203       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.316264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.316386       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.370217       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.370241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.902921       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.245.40"}
	W0729 13:15:58.318157       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 13:15:58.370735       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 13:15:58.380218       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0729 13:16:15.405950       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.103:8443->10.244.0.32:36038: read: connection reset by peer
	I0729 13:17:48.412840       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.139.115"}
	E0729 13:17:49.912031       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0] <==
	W0729 13:16:40.682187       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:16:40.682218       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:16:48.294902       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:16:48.294979       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:17:09.394733       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:17:09.395022       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:17:19.233586       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:17:19.233653       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:17:24.591293       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:17:24.591448       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:17:43.350573       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:17:43.350667       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:17:44.775678       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:17:44.775709       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 13:17:48.271820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="45.555189ms"
	I0729 13:17:48.281063       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.501999ms"
	I0729 13:17:48.281569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="32.26µs"
	I0729 13:17:48.291670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="25.132µs"
	I0729 13:17:49.882734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.664032ms"
	I0729 13:17:49.883472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="29.391µs"
	I0729 13:17:50.504007       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0729 13:17:50.508395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="4.921µs"
	I0729 13:17:50.513811       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0729 13:17:53.182592       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:17:53.182660       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37] <==
	I0729 13:13:08.803380       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:13:08.840503       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.103"]
	I0729 13:13:08.936202       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:13:08.936258       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:13:08.936275       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:13:08.939384       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:13:08.939573       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:13:08.939604       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:13:08.941034       1 config.go:192] "Starting service config controller"
	I0729 13:13:08.941068       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:13:08.941094       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:13:08.941098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:13:08.941639       1 config.go:319] "Starting node config controller"
	I0729 13:13:08.941668       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:13:09.041451       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:13:09.041466       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:13:09.043253       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423] <==
	E0729 13:12:50.960489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:12:50.960684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:12:50.960778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:12:50.963141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:12:51.818777       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 13:12:51.818824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 13:12:51.824985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:12:51.825059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:12:51.831160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 13:12:51.831220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 13:12:51.904670       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 13:12:51.904718       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:12:51.914304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:12:51.914386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 13:12:51.926485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:12:51.926526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:12:51.992542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:12:51.992589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:12:52.004912       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 13:12:52.004966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 13:12:52.019161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:12:52.019207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 13:12:52.044210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:12:52.044298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0729 13:12:53.852153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:17:49 addons-881745 kubelet[1267]: I0729 13:17:49.879506    1267 scope.go:117] "RemoveContainer" containerID="a71808f4ad04f73bea114119c585f30cda1a61a736c90cb974e21fab6325550a"
	Jul 29 13:17:49 addons-881745 kubelet[1267]: E0729 13:17:49.880165    1267 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71808f4ad04f73bea114119c585f30cda1a61a736c90cb974e21fab6325550a\": container with ID starting with a71808f4ad04f73bea114119c585f30cda1a61a736c90cb974e21fab6325550a not found: ID does not exist" containerID="a71808f4ad04f73bea114119c585f30cda1a61a736c90cb974e21fab6325550a"
	Jul 29 13:17:49 addons-881745 kubelet[1267]: I0729 13:17:49.880201    1267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71808f4ad04f73bea114119c585f30cda1a61a736c90cb974e21fab6325550a"} err="failed to get container status \"a71808f4ad04f73bea114119c585f30cda1a61a736c90cb974e21fab6325550a\": rpc error: code = NotFound desc = could not find container \"a71808f4ad04f73bea114119c585f30cda1a61a736c90cb974e21fab6325550a\": container with ID starting with a71808f4ad04f73bea114119c585f30cda1a61a736c90cb974e21fab6325550a not found: ID does not exist"
	Jul 29 13:17:49 addons-881745 kubelet[1267]: I0729 13:17:49.895544    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-52v9x" podStartSLOduration=1.14541889 podStartE2EDuration="1.89550592s" podCreationTimestamp="2024-07-29 13:17:48 +0000 UTC" firstStartedPulling="2024-07-29 13:17:48.853293087 +0000 UTC m=+295.713161289" lastFinishedPulling="2024-07-29 13:17:49.603380127 +0000 UTC m=+296.463248319" observedRunningTime="2024-07-29 13:17:49.872785151 +0000 UTC m=+296.732653362" watchObservedRunningTime="2024-07-29 13:17:49.89550592 +0000 UTC m=+296.755374132"
	Jul 29 13:17:51 addons-881745 kubelet[1267]: I0729 13:17:51.297998    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cfd01c1-a81c-485e-96e1-c9b923f9a7a6" path="/var/lib/kubelet/pods/0cfd01c1-a81c-485e-96e1-c9b923f9a7a6/volumes"
	Jul 29 13:17:51 addons-881745 kubelet[1267]: I0729 13:17:51.298486    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3782b3a7-db92-4e0b-9a46-55e0b7de43be" path="/var/lib/kubelet/pods/3782b3a7-db92-4e0b-9a46-55e0b7de43be/volumes"
	Jul 29 13:17:51 addons-881745 kubelet[1267]: I0729 13:17:51.298875    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2552e77-5f20-4297-939a-ecb25c5ae5a4" path="/var/lib/kubelet/pods/f2552e77-5f20-4297-939a-ecb25c5ae5a4/volumes"
	Jul 29 13:17:53 addons-881745 kubelet[1267]: E0729 13:17:53.316891    1267 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:17:53 addons-881745 kubelet[1267]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:17:53 addons-881745 kubelet[1267]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:17:53 addons-881745 kubelet[1267]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:17:53 addons-881745 kubelet[1267]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.782486    1267 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-967lj\" (UniqueName: \"kubernetes.io/projected/7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f-kube-api-access-967lj\") pod \"7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f\" (UID: \"7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f\") "
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.782549    1267 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f-webhook-cert\") pod \"7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f\" (UID: \"7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f\") "
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.785943    1267 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f-kube-api-access-967lj" (OuterVolumeSpecName: "kube-api-access-967lj") pod "7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f" (UID: "7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f"). InnerVolumeSpecName "kube-api-access-967lj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.787481    1267 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f" (UID: "7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.871019    1267 scope.go:117] "RemoveContainer" containerID="d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7"
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.884132    1267 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-967lj\" (UniqueName: \"kubernetes.io/projected/7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f-kube-api-access-967lj\") on node \"addons-881745\" DevicePath \"\""
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.884152    1267 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f-webhook-cert\") on node \"addons-881745\" DevicePath \"\""
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.892444    1267 scope.go:117] "RemoveContainer" containerID="d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7"
	Jul 29 13:17:53 addons-881745 kubelet[1267]: E0729 13:17:53.893004    1267 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7\": container with ID starting with d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7 not found: ID does not exist" containerID="d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7"
	Jul 29 13:17:53 addons-881745 kubelet[1267]: I0729 13:17:53.893080    1267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7"} err="failed to get container status \"d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7\": rpc error: code = NotFound desc = could not find container \"d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7\": container with ID starting with d00867f82ca9c9fe415ef76b11fadd6b9f0baeb9299f02c43ef0380fd0ed49d7 not found: ID does not exist"
	Jul 29 13:17:54 addons-881745 kubelet[1267]: I0729 13:17:54.874682    1267 scope.go:117] "RemoveContainer" containerID="07fc855342932a22624138ff357f18546a242db912116e0725ff5c21462214aa"
	Jul 29 13:17:54 addons-881745 kubelet[1267]: I0729 13:17:54.899342    1267 scope.go:117] "RemoveContainer" containerID="3992fd665362212919fa33098f1a4b914227e4a9cf51702a1da0974768fa5d30"
	Jul 29 13:17:55 addons-881745 kubelet[1267]: I0729 13:17:55.298865    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f" path="/var/lib/kubelet/pods/7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f/volumes"
	
	
	==> storage-provisioner [5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7] <==
	I0729 13:13:15.947881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:13:15.968012       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:13:15.968210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 13:13:15.983759       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 13:13:15.984899       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78a26d8d-8c16-4971-9e86-a7a7d4f19145", APIVersion:"v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-881745_aa192ac5-bb3a-4e9d-bba3-13da7fa7b8e2 became leader
	I0729 13:13:15.985405       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-881745_aa192ac5-bb3a-4e9d-bba3-13da7fa7b8e2!
	I0729 13:13:16.086137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-881745_aa192ac5-bb3a-4e9d-bba3-13da7fa7b8e2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-881745 -n addons-881745
helpers_test.go:261: (dbg) Run:  kubectl --context addons-881745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.53s)

x
+
TestAddons/parallel/MetricsServer (344.24s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.27259ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-5nbcm" [c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004209344s
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (72.840688ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-881745, age: 2m12.978273163s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (70.345072ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 2m0.712655097s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (66.509855ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 2m3.308297957s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (70.745476ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 2m6.88304669s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (68.536211ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 2m14.534528066s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (65.728802ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 2m24.145046024s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (83.448258ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 2m42.846701736s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (66.40785ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 3m5.324286648s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (67.92279ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 4m18.116748432s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (67.100582ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 5m16.479924839s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (64.87925ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 6m6.083845171s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-881745 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-881745 top pods -n kube-system: exit status 1 (69.521505ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bdkkm, age: 7m33.463465684s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
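Note: every "kubectl top pods" retry above returns "Metrics not available", meaning the aggregated metrics.k8s.io API never began serving during the test's 6m0s window. A rough manual diagnostic sketch, not part of the test itself and assuming the addon's usual "metrics-server" deployment name (the pod "metrics-server-c59844bb4-5nbcm" above suggests it), would be:
	kubectl --context addons-881745 get apiservice v1beta1.metrics.k8s.io          # should eventually report Available=True
	kubectl --context addons-881745 -n kube-system logs deployment/metrics-server  # assumed deployment name; check for scrape/TLS errors
	kubectl --context addons-881745 top pods -n kube-system                        # the same query the test keeps retrying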
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-881745 -n addons-881745
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 logs -n 25: (1.292220532s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-960541                                                                     | download-only-960541 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-696925 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | binary-mirror-696925                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33915                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-696925                                                                     | binary-mirror-696925 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | addons-881745                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | addons-881745                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-881745 --wait=true                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:14 UTC | 29 Jul 24 13:14 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-881745 ssh cat                                                                       | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | /opt/local-path-provisioner/pvc-2f86f84c-0179-4267-9abb-37e36ba02c83_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-881745 ip                                                                            | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | addons-881745                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-881745 ssh curl -s                                                                   | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-881745 addons                                                                        | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | addons-881745                                                                               |                      |         |         |                     |                     |
	| addons  | addons-881745 addons                                                                        | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:15 UTC | 29 Jul 24 13:15 UTC |
	|         | -p addons-881745                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:16 UTC | 29 Jul 24 13:16 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:16 UTC | 29 Jul 24 13:16 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:16 UTC | 29 Jul 24 13:16 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:16 UTC | 29 Jul 24 13:16 UTC |
	|         | -p addons-881745                                                                            |                      |         |         |                     |                     |
	| ip      | addons-881745 ip                                                                            | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:17 UTC |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-881745 addons disable                                                                | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:17 UTC | 29 Jul 24 13:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-881745 addons                                                                        | addons-881745        | jenkins | v1.33.1 | 29 Jul 24 13:20 UTC | 29 Jul 24 13:20 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:12:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:12:11.880480  982934 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:12:11.880621  982934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:12:11.880631  982934 out.go:304] Setting ErrFile to fd 2...
	I0729 13:12:11.880635  982934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:12:11.880835  982934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:12:11.881440  982934 out.go:298] Setting JSON to false
	I0729 13:12:11.882512  982934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10484,"bootTime":1722248248,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:12:11.882577  982934 start.go:139] virtualization: kvm guest
	I0729 13:12:11.884635  982934 out.go:177] * [addons-881745] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:12:11.885929  982934 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:12:11.886010  982934 notify.go:220] Checking for updates...
	I0729 13:12:11.888451  982934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:12:11.889593  982934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:12:11.890718  982934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:12:11.891916  982934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:12:11.893247  982934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:12:11.894499  982934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:12:11.925062  982934 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:12:11.926166  982934 start.go:297] selected driver: kvm2
	I0729 13:12:11.926189  982934 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:12:11.926205  982934 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:12:11.926915  982934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:12:11.927004  982934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:12:11.941706  982934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:12:11.941748  982934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:12:11.941939  982934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:12:11.941963  982934 cni.go:84] Creating CNI manager for ""
	I0729 13:12:11.941969  982934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:12:11.941975  982934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:12:11.942024  982934 start.go:340] cluster config:
	{Name:addons-881745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:12:11.942115  982934 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:12:11.943791  982934 out.go:177] * Starting "addons-881745" primary control-plane node in "addons-881745" cluster
	I0729 13:12:11.944861  982934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:12:11.944886  982934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:12:11.944896  982934 cache.go:56] Caching tarball of preloaded images
	I0729 13:12:11.944970  982934 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:12:11.944989  982934 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:12:11.945264  982934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/config.json ...
	I0729 13:12:11.945289  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/config.json: {Name:mk1324dba7512e03a30b119e27dd3470d567c772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:11.945415  982934 start.go:360] acquireMachinesLock for addons-881745: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:12:11.945457  982934 start.go:364] duration metric: took 30.275µs to acquireMachinesLock for "addons-881745"
	I0729 13:12:11.945475  982934 start.go:93] Provisioning new machine with config: &{Name:addons-881745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:12:11.945526  982934 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:12:11.946915  982934 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
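For reference, the sizing in that message (2 CPUs, 4000MB of memory, a 20000MB disk, the kvm2 driver and the crio runtime) matches the cluster config dumped above. A roughly equivalent manual invocation would look like the sketch below; the profile name comes from this log, but the exact flags the test harness passed are not shown here, so treat this purely as an illustration.

    # Hedged sketch: request a similar single-node cluster by hand.
    minikube start -p addons-881745 \
      --driver=kvm2 \
      --container-runtime=crio \
      --cpus=2 --memory=4000mb --disk-size=20000mb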
	I0729 13:12:11.947026  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:12:11.947059  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:12:11.960691  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
	I0729 13:12:11.961091  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:12:11.961698  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:12:11.961720  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:12:11.962039  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:12:11.962224  982934 main.go:141] libmachine: (addons-881745) Calling .GetMachineName
	I0729 13:12:11.962339  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:11.962476  982934 start.go:159] libmachine.API.Create for "addons-881745" (driver="kvm2")
	I0729 13:12:11.962507  982934 client.go:168] LocalClient.Create starting
	I0729 13:12:11.962545  982934 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 13:12:12.155740  982934 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 13:12:12.217284  982934 main.go:141] libmachine: Running pre-create checks...
	I0729 13:12:12.217308  982934 main.go:141] libmachine: (addons-881745) Calling .PreCreateCheck
	I0729 13:12:12.217832  982934 main.go:141] libmachine: (addons-881745) Calling .GetConfigRaw
	I0729 13:12:12.218300  982934 main.go:141] libmachine: Creating machine...
	I0729 13:12:12.218314  982934 main.go:141] libmachine: (addons-881745) Calling .Create
	I0729 13:12:12.218428  982934 main.go:141] libmachine: (addons-881745) Creating KVM machine...
	I0729 13:12:12.219688  982934 main.go:141] libmachine: (addons-881745) DBG | found existing default KVM network
	I0729 13:12:12.220401  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:12.220253  982956 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0729 13:12:12.220442  982934 main.go:141] libmachine: (addons-881745) DBG | created network xml: 
	I0729 13:12:12.220455  982934 main.go:141] libmachine: (addons-881745) DBG | <network>
	I0729 13:12:12.220463  982934 main.go:141] libmachine: (addons-881745) DBG |   <name>mk-addons-881745</name>
	I0729 13:12:12.220484  982934 main.go:141] libmachine: (addons-881745) DBG |   <dns enable='no'/>
	I0729 13:12:12.220495  982934 main.go:141] libmachine: (addons-881745) DBG |   
	I0729 13:12:12.220513  982934 main.go:141] libmachine: (addons-881745) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 13:12:12.220522  982934 main.go:141] libmachine: (addons-881745) DBG |     <dhcp>
	I0729 13:12:12.220579  982934 main.go:141] libmachine: (addons-881745) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 13:12:12.220598  982934 main.go:141] libmachine: (addons-881745) DBG |     </dhcp>
	I0729 13:12:12.220605  982934 main.go:141] libmachine: (addons-881745) DBG |   </ip>
	I0729 13:12:12.220609  982934 main.go:141] libmachine: (addons-881745) DBG |   
	I0729 13:12:12.220617  982934 main.go:141] libmachine: (addons-881745) DBG | </network>
	I0729 13:12:12.220622  982934 main.go:141] libmachine: (addons-881745) DBG | 
	I0729 13:12:12.225615  982934 main.go:141] libmachine: (addons-881745) DBG | trying to create private KVM network mk-addons-881745 192.168.39.0/24...
	I0729 13:12:12.291322  982934 main.go:141] libmachine: (addons-881745) DBG | private KVM network mk-addons-881745 192.168.39.0/24 created
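The network XML printed above is handed to libvirt through its API, but the same private network could be created by hand with the libvirt CLI. A minimal sketch, assuming the XML has been saved to mk-addons-881745.xml:

    # Hedged sketch: define and start the equivalent private network with virsh.
    virsh net-define mk-addons-881745.xml   # register the <network> definition shown above
    virsh net-start mk-addons-881745        # bring the bridge and its DHCP range up
    virsh net-autostart mk-addons-881745    # optionally start it on host boot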
	I0729 13:12:12.291380  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:12.291272  982956 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:12:12.291400  982934 main.go:141] libmachine: (addons-881745) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745 ...
	I0729 13:12:12.291427  982934 main.go:141] libmachine: (addons-881745) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:12:12.291477  982934 main.go:141] libmachine: (addons-881745) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:12:12.553094  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:12.552975  982956 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa...
	I0729 13:12:13.107412  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:13.107281  982956 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/addons-881745.rawdisk...
	I0729 13:12:13.107446  982934 main.go:141] libmachine: (addons-881745) DBG | Writing magic tar header
	I0729 13:12:13.107460  982934 main.go:141] libmachine: (addons-881745) DBG | Writing SSH key tar header
	I0729 13:12:13.107472  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:13.107389  982956 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745 ...
	I0729 13:12:13.107487  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745
	I0729 13:12:13.107496  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 13:12:13.107505  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:12:13.107511  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 13:12:13.107520  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:12:13.107545  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745 (perms=drwx------)
	I0729 13:12:13.107559  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:12:13.107569  982934 main.go:141] libmachine: (addons-881745) DBG | Checking permissions on dir: /home
	I0729 13:12:13.107574  982934 main.go:141] libmachine: (addons-881745) DBG | Skipping /home - not owner
	I0729 13:12:13.107584  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:12:13.107592  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 13:12:13.107608  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 13:12:13.107624  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:12:13.107639  982934 main.go:141] libmachine: (addons-881745) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:12:13.107648  982934 main.go:141] libmachine: (addons-881745) Creating domain...
	I0729 13:12:13.108771  982934 main.go:141] libmachine: (addons-881745) define libvirt domain using xml: 
	I0729 13:12:13.108802  982934 main.go:141] libmachine: (addons-881745) <domain type='kvm'>
	I0729 13:12:13.108810  982934 main.go:141] libmachine: (addons-881745)   <name>addons-881745</name>
	I0729 13:12:13.108815  982934 main.go:141] libmachine: (addons-881745)   <memory unit='MiB'>4000</memory>
	I0729 13:12:13.108821  982934 main.go:141] libmachine: (addons-881745)   <vcpu>2</vcpu>
	I0729 13:12:13.108827  982934 main.go:141] libmachine: (addons-881745)   <features>
	I0729 13:12:13.108834  982934 main.go:141] libmachine: (addons-881745)     <acpi/>
	I0729 13:12:13.108840  982934 main.go:141] libmachine: (addons-881745)     <apic/>
	I0729 13:12:13.108845  982934 main.go:141] libmachine: (addons-881745)     <pae/>
	I0729 13:12:13.108854  982934 main.go:141] libmachine: (addons-881745)     
	I0729 13:12:13.108861  982934 main.go:141] libmachine: (addons-881745)   </features>
	I0729 13:12:13.108871  982934 main.go:141] libmachine: (addons-881745)   <cpu mode='host-passthrough'>
	I0729 13:12:13.108878  982934 main.go:141] libmachine: (addons-881745)   
	I0729 13:12:13.108893  982934 main.go:141] libmachine: (addons-881745)   </cpu>
	I0729 13:12:13.108900  982934 main.go:141] libmachine: (addons-881745)   <os>
	I0729 13:12:13.108905  982934 main.go:141] libmachine: (addons-881745)     <type>hvm</type>
	I0729 13:12:13.108912  982934 main.go:141] libmachine: (addons-881745)     <boot dev='cdrom'/>
	I0729 13:12:13.108917  982934 main.go:141] libmachine: (addons-881745)     <boot dev='hd'/>
	I0729 13:12:13.108935  982934 main.go:141] libmachine: (addons-881745)     <bootmenu enable='no'/>
	I0729 13:12:13.108941  982934 main.go:141] libmachine: (addons-881745)   </os>
	I0729 13:12:13.108946  982934 main.go:141] libmachine: (addons-881745)   <devices>
	I0729 13:12:13.108955  982934 main.go:141] libmachine: (addons-881745)     <disk type='file' device='cdrom'>
	I0729 13:12:13.108968  982934 main.go:141] libmachine: (addons-881745)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/boot2docker.iso'/>
	I0729 13:12:13.108982  982934 main.go:141] libmachine: (addons-881745)       <target dev='hdc' bus='scsi'/>
	I0729 13:12:13.108989  982934 main.go:141] libmachine: (addons-881745)       <readonly/>
	I0729 13:12:13.108998  982934 main.go:141] libmachine: (addons-881745)     </disk>
	I0729 13:12:13.109006  982934 main.go:141] libmachine: (addons-881745)     <disk type='file' device='disk'>
	I0729 13:12:13.109017  982934 main.go:141] libmachine: (addons-881745)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:12:13.109027  982934 main.go:141] libmachine: (addons-881745)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/addons-881745.rawdisk'/>
	I0729 13:12:13.109034  982934 main.go:141] libmachine: (addons-881745)       <target dev='hda' bus='virtio'/>
	I0729 13:12:13.109039  982934 main.go:141] libmachine: (addons-881745)     </disk>
	I0729 13:12:13.109046  982934 main.go:141] libmachine: (addons-881745)     <interface type='network'>
	I0729 13:12:13.109051  982934 main.go:141] libmachine: (addons-881745)       <source network='mk-addons-881745'/>
	I0729 13:12:13.109058  982934 main.go:141] libmachine: (addons-881745)       <model type='virtio'/>
	I0729 13:12:13.109063  982934 main.go:141] libmachine: (addons-881745)     </interface>
	I0729 13:12:13.109069  982934 main.go:141] libmachine: (addons-881745)     <interface type='network'>
	I0729 13:12:13.109075  982934 main.go:141] libmachine: (addons-881745)       <source network='default'/>
	I0729 13:12:13.109082  982934 main.go:141] libmachine: (addons-881745)       <model type='virtio'/>
	I0729 13:12:13.109087  982934 main.go:141] libmachine: (addons-881745)     </interface>
	I0729 13:12:13.109096  982934 main.go:141] libmachine: (addons-881745)     <serial type='pty'>
	I0729 13:12:13.109140  982934 main.go:141] libmachine: (addons-881745)       <target port='0'/>
	I0729 13:12:13.109167  982934 main.go:141] libmachine: (addons-881745)     </serial>
	I0729 13:12:13.109183  982934 main.go:141] libmachine: (addons-881745)     <console type='pty'>
	I0729 13:12:13.109196  982934 main.go:141] libmachine: (addons-881745)       <target type='serial' port='0'/>
	I0729 13:12:13.109206  982934 main.go:141] libmachine: (addons-881745)     </console>
	I0729 13:12:13.109220  982934 main.go:141] libmachine: (addons-881745)     <rng model='virtio'>
	I0729 13:12:13.109231  982934 main.go:141] libmachine: (addons-881745)       <backend model='random'>/dev/random</backend>
	I0729 13:12:13.109239  982934 main.go:141] libmachine: (addons-881745)     </rng>
	I0729 13:12:13.109248  982934 main.go:141] libmachine: (addons-881745)     
	I0729 13:12:13.109258  982934 main.go:141] libmachine: (addons-881745)     
	I0729 13:12:13.109267  982934 main.go:141] libmachine: (addons-881745)   </devices>
	I0729 13:12:13.109286  982934 main.go:141] libmachine: (addons-881745) </domain>
	I0729 13:12:13.109301  982934 main.go:141] libmachine: (addons-881745) 
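libmachine defines and boots this domain through the libvirt API; the same XML could be used directly with the CLI. A minimal sketch, assuming the generated XML is saved to addons-881745.xml:

    # Hedged sketch: create and boot the VM from the domain XML above.
    virsh define addons-881745.xml    # register the domain ("define libvirt domain using xml")
    virsh start addons-881745         # boot it ("Creating domain..." below)
    virsh domifaddr addons-881745     # once booted, list the addresses its interfaces obtained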
	I0729 13:12:13.113640  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:04:d4:65 in network default
	I0729 13:12:13.114178  982934 main.go:141] libmachine: (addons-881745) Ensuring networks are active...
	I0729 13:12:13.114191  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:13.114796  982934 main.go:141] libmachine: (addons-881745) Ensuring network default is active
	I0729 13:12:13.115057  982934 main.go:141] libmachine: (addons-881745) Ensuring network mk-addons-881745 is active
	I0729 13:12:13.115461  982934 main.go:141] libmachine: (addons-881745) Getting domain xml...
	I0729 13:12:13.116055  982934 main.go:141] libmachine: (addons-881745) Creating domain...
	I0729 13:12:13.421832  982934 main.go:141] libmachine: (addons-881745) Waiting to get IP...
	I0729 13:12:13.422554  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:13.422923  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:13.422947  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:13.422897  982956 retry.go:31] will retry after 305.464892ms: waiting for machine to come up
	I0729 13:12:13.730394  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:13.730791  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:13.730818  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:13.730759  982956 retry.go:31] will retry after 308.538344ms: waiting for machine to come up
	I0729 13:12:14.041274  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:14.041705  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:14.041737  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:14.041651  982956 retry.go:31] will retry after 391.302482ms: waiting for machine to come up
	I0729 13:12:14.434132  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:14.434536  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:14.434564  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:14.434483  982956 retry.go:31] will retry after 382.183876ms: waiting for machine to come up
	I0729 13:12:14.818073  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:14.818460  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:14.818481  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:14.818441  982956 retry.go:31] will retry after 660.554898ms: waiting for machine to come up
	I0729 13:12:15.480166  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:15.480597  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:15.480621  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:15.480551  982956 retry.go:31] will retry after 773.489083ms: waiting for machine to come up
	I0729 13:12:16.255591  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:16.256055  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:16.256081  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:16.256000  982956 retry.go:31] will retry after 721.534344ms: waiting for machine to come up
	I0729 13:12:16.979414  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:16.979768  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:16.979802  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:16.979713  982956 retry.go:31] will retry after 1.407916984s: waiting for machine to come up
	I0729 13:12:18.389344  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:18.389777  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:18.389813  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:18.389722  982956 retry.go:31] will retry after 1.620156831s: waiting for machine to come up
	I0729 13:12:20.012437  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:20.012796  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:20.012823  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:20.012743  982956 retry.go:31] will retry after 2.309026893s: waiting for machine to come up
	I0729 13:12:22.323813  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:22.324243  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:22.324300  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:22.324213  982956 retry.go:31] will retry after 1.883250908s: waiting for machine to come up
	I0729 13:12:24.210258  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:24.210712  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:24.210740  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:24.210654  982956 retry.go:31] will retry after 3.187634723s: waiting for machine to come up
	I0729 13:12:27.399311  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:27.399726  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:27.399752  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:27.399655  982956 retry.go:31] will retry after 3.287373845s: waiting for machine to come up
	I0729 13:12:30.689681  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:30.690020  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find current IP address of domain addons-881745 in network mk-addons-881745
	I0729 13:12:30.690048  982934 main.go:141] libmachine: (addons-881745) DBG | I0729 13:12:30.689968  982956 retry.go:31] will retry after 5.570376363s: waiting for machine to come up
	I0729 13:12:36.265556  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.266015  982934 main.go:141] libmachine: (addons-881745) Found IP for machine: 192.168.39.103
	I0729 13:12:36.266039  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has current primary IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.266045  982934 main.go:141] libmachine: (addons-881745) Reserving static IP address...
	I0729 13:12:36.266413  982934 main.go:141] libmachine: (addons-881745) DBG | unable to find host DHCP lease matching {name: "addons-881745", mac: "52:54:00:c8:39:fc", ip: "192.168.39.103"} in network mk-addons-881745
	I0729 13:12:36.415984  982934 main.go:141] libmachine: (addons-881745) DBG | Getting to WaitForSSH function...
	I0729 13:12:36.416024  982934 main.go:141] libmachine: (addons-881745) Reserved static IP address: 192.168.39.103
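The retry loop above is libmachine polling libvirt until a DHCP lease appears for the VM's MAC address (52:54:00:c8:39:fc), waiting a little longer on each attempt. A rough shell equivalent of that wait, not the code minikube actually runs:

    # Hedged sketch: poll the network's DHCP leases until the domain gets an address.
    mac=52:54:00:c8:39:fc
    until ip=$(virsh net-dhcp-leases mk-addons-881745 \
               | awk -v m="$mac" '$3 == m {split($5, a, "/"); print a[1]}') && [ -n "$ip" ]; do
      sleep 2   # the real code uses growing, jittered delays instead of a fixed sleep
    done
    echo "machine is reachable at $ip"   # 192.168.39.103 in this run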
	I0729 13:12:36.416037  982934 main.go:141] libmachine: (addons-881745) Waiting for SSH to be available...
	I0729 13:12:36.418821  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.419246  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.419283  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.419487  982934 main.go:141] libmachine: (addons-881745) DBG | Using SSH client type: external
	I0729 13:12:36.419511  982934 main.go:141] libmachine: (addons-881745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa (-rw-------)
	I0729 13:12:36.419540  982934 main.go:141] libmachine: (addons-881745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:12:36.419555  982934 main.go:141] libmachine: (addons-881745) DBG | About to run SSH command:
	I0729 13:12:36.419587  982934 main.go:141] libmachine: (addons-881745) DBG | exit 0
	I0729 13:12:36.544357  982934 main.go:141] libmachine: (addons-881745) DBG | SSH cmd err, output: <nil>: 
	I0729 13:12:36.544663  982934 main.go:141] libmachine: (addons-881745) KVM machine creation complete!
	I0729 13:12:36.545034  982934 main.go:141] libmachine: (addons-881745) Calling .GetConfigRaw
	I0729 13:12:36.561927  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:36.562173  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:36.562371  982934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:12:36.562387  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:12:36.563879  982934 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:12:36.563894  982934 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:12:36.563900  982934 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:12:36.563905  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:36.566356  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.566717  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.566745  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.566836  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:36.566999  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.567159  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.567267  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:36.567438  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:36.567655  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:36.567665  982934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:12:36.675543  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:12:36.675567  982934 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:12:36.675575  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:36.678303  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.678644  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.678669  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.678793  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:36.679000  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.679202  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.679384  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:36.679560  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:36.679774  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:36.679786  982934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:12:36.789093  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:12:36.789153  982934 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:12:36.789160  982934 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:12:36.789168  982934 main.go:141] libmachine: (addons-881745) Calling .GetMachineName
	I0729 13:12:36.789410  982934 buildroot.go:166] provisioning hostname "addons-881745"
	I0729 13:12:36.789436  982934 main.go:141] libmachine: (addons-881745) Calling .GetMachineName
	I0729 13:12:36.789676  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:36.792173  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.792523  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.792551  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.792740  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:36.792922  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.793102  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.793225  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:36.793389  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:36.793581  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:36.793594  982934 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-881745 && echo "addons-881745" | sudo tee /etc/hostname
	I0729 13:12:36.914458  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-881745
	
	I0729 13:12:36.914490  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:36.917317  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.917671  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:36.917698  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:36.917901  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:36.918099  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.918287  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:36.918396  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:36.918537  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:36.918750  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:36.918768  982934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-881745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-881745/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-881745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:12:37.032814  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:12:37.032857  982934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:12:37.032877  982934 buildroot.go:174] setting up certificates
	I0729 13:12:37.032890  982934 provision.go:84] configureAuth start
	I0729 13:12:37.032899  982934 main.go:141] libmachine: (addons-881745) Calling .GetMachineName
	I0729 13:12:37.033292  982934 main.go:141] libmachine: (addons-881745) Calling .GetIP
	I0729 13:12:37.035722  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.036065  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.036101  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.036206  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.038254  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.038498  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.038518  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.038643  982934 provision.go:143] copyHostCerts
	I0729 13:12:37.038704  982934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:12:37.038850  982934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:12:37.038917  982934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:12:37.038970  982934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.addons-881745 san=[127.0.0.1 192.168.39.103 addons-881745 localhost minikube]
	I0729 13:12:37.239500  982934 provision.go:177] copyRemoteCerts
	I0729 13:12:37.239598  982934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:12:37.239649  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.242198  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.242537  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.242577  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.242715  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:37.242944  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.243107  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:37.243255  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:12:37.328164  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:12:37.352836  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 13:12:37.375487  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:12:37.398432  982934 provision.go:87] duration metric: took 365.524166ms to configureAuth
	I0729 13:12:37.398468  982934 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:12:37.398654  982934 config.go:182] Loaded profile config "addons-881745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:12:37.398750  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.401383  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.401715  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.401744  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.401922  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:37.402126  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.402292  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.402421  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:37.402567  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:37.402749  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:37.402775  982934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:12:37.895943  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:12:37.895973  982934 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:12:37.895981  982934 main.go:141] libmachine: (addons-881745) Calling .GetURL
	I0729 13:12:37.897390  982934 main.go:141] libmachine: (addons-881745) DBG | Using libvirt version 6000000
	I0729 13:12:37.899335  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.899629  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.899657  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.899793  982934 main.go:141] libmachine: Docker is up and running!
	I0729 13:12:37.899807  982934 main.go:141] libmachine: Reticulating splines...
	I0729 13:12:37.899816  982934 client.go:171] duration metric: took 25.937296783s to LocalClient.Create
	I0729 13:12:37.899851  982934 start.go:167] duration metric: took 25.937370991s to libmachine.API.Create "addons-881745"
	I0729 13:12:37.899865  982934 start.go:293] postStartSetup for "addons-881745" (driver="kvm2")
	I0729 13:12:37.899881  982934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:12:37.899904  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:37.900136  982934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:12:37.900161  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.902176  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.902494  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.902523  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.902641  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:37.902833  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.902979  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:37.903098  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:12:37.987037  982934 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:12:37.991158  982934 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:12:37.991190  982934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:12:37.991277  982934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:12:37.991303  982934 start.go:296] duration metric: took 91.428199ms for postStartSetup
	I0729 13:12:37.991350  982934 main.go:141] libmachine: (addons-881745) Calling .GetConfigRaw
	I0729 13:12:37.991957  982934 main.go:141] libmachine: (addons-881745) Calling .GetIP
	I0729 13:12:37.994604  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.994919  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.994941  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.995219  982934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/config.json ...
	I0729 13:12:37.995418  982934 start.go:128] duration metric: took 26.049880989s to createHost
	I0729 13:12:37.995443  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:37.997536  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.997888  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:37.997917  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:37.998044  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:37.998221  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.998372  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:37.998507  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:37.998657  982934 main.go:141] libmachine: Using SSH client type: native
	I0729 13:12:37.998818  982934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0729 13:12:37.998830  982934 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:12:38.104876  982934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722258758.085146130
	
	I0729 13:12:38.104902  982934 fix.go:216] guest clock: 1722258758.085146130
	I0729 13:12:38.104909  982934 fix.go:229] Guest: 2024-07-29 13:12:38.08514613 +0000 UTC Remote: 2024-07-29 13:12:37.995430948 +0000 UTC m=+26.148363706 (delta=89.715182ms)
	I0729 13:12:38.104951  982934 fix.go:200] guest clock delta is within tolerance: 89.715182ms
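The check above only verifies that the guest clock is close to the host clock: the command run over SSH is date +%s.%N (the %!s(MISSING)/%!N(MISSING) in the log is the format verbs being swallowed by the logger), and its output is compared with the host's wall clock. A rough host-side equivalent, using the key path from this log:

    # Hedged sketch: measure guest/host clock skew the way the log describes.
    key=/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa
    guest=$(ssh -o StrictHostKeyChecking=no -i "$key" docker@192.168.39.103 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "$guest - $host" | bc)s"   # ~0.09s here, well within tolerance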
	I0729 13:12:38.104957  982934 start.go:83] releasing machines lock for "addons-881745", held for 26.159490127s
	I0729 13:12:38.104980  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:38.105250  982934 main.go:141] libmachine: (addons-881745) Calling .GetIP
	I0729 13:12:38.107573  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.107881  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:38.107913  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.108100  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:38.108623  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:38.108825  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:12:38.108932  982934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:12:38.108990  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:38.109046  982934 ssh_runner.go:195] Run: cat /version.json
	I0729 13:12:38.109070  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:12:38.111554  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.111677  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.111910  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:38.111934  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.112085  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:38.112092  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:38.112117  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:38.112225  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:12:38.112311  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:38.112379  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:12:38.112466  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:38.112528  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:12:38.112592  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:12:38.112631  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:12:38.189630  982934 ssh_runner.go:195] Run: systemctl --version
	I0729 13:12:38.214003  982934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:12:38.377207  982934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:12:38.383106  982934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:12:38.383172  982934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:12:38.399960  982934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:12:38.399992  982934 start.go:495] detecting cgroup driver to use...
	I0729 13:12:38.400068  982934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:12:38.417578  982934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:12:38.430667  982934 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:12:38.430735  982934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:12:38.443225  982934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:12:38.455891  982934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:12:38.566179  982934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:12:38.719619  982934 docker.go:233] disabling docker service ...
	I0729 13:12:38.719688  982934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:12:38.734511  982934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:12:38.747992  982934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:12:38.886768  982934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:12:39.004857  982934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:12:39.018417  982934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:12:39.036056  982934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:12:39.036121  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.046634  982934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:12:39.046697  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.056495  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.066282  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.076070  982934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:12:39.086092  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.095697  982934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:12:39.111898  982934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
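
The four sed invocations above are how the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf gets its pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl. A minimal Go sketch of the same idempotent rewrite (illustrative only, not minikube's crio.go; it covers just the first two settings logged) looks like:

// Hypothetical sketch of the config rewrite the log performs with sed:
// set the pause image and cgroup manager in CRI-O's drop-in config.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
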
	I0729 13:12:39.121485  982934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:12:39.130702  982934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:12:39.130756  982934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:12:39.143530  982934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
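
The netfilter step above is a probe-then-fallback: the sysctl file is missing until br_netfilter is loaded, so the status-255 error is tolerated, the module is loaded, and IPv4 forwarding is enabled. A hedged local sketch of that sequence (the real commands run over SSH on the guest) is:

// Minimal sketch (assumed, not minikube's code) of the fallback shown in the log:
// if the bridge-netfilter sysctl is missing, load br_netfilter, then enable IPv4 forwarding.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// The sysctl file only appears once the br_netfilter module is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
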
	I0729 13:12:39.152960  982934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:12:39.270597  982934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:12:39.402164  982934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:12:39.402286  982934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:12:39.406943  982934 start.go:563] Will wait 60s for crictl version
	I0729 13:12:39.407017  982934 ssh_runner.go:195] Run: which crictl
	I0729 13:12:39.410615  982934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:12:39.449246  982934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:12:39.449369  982934 ssh_runner.go:195] Run: crio --version
	I0729 13:12:39.480692  982934 ssh_runner.go:195] Run: crio --version
	I0729 13:12:39.513261  982934 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:12:39.514529  982934 main.go:141] libmachine: (addons-881745) Calling .GetIP
	I0729 13:12:39.517235  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:39.517567  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:12:39.517595  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:12:39.517832  982934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:12:39.521890  982934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:12:39.533874  982934 kubeadm.go:883] updating cluster {Name:addons-881745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:12:39.533992  982934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:12:39.534047  982934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:12:39.569672  982934 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:12:39.569740  982934 ssh_runner.go:195] Run: which lz4
	I0729 13:12:39.573850  982934 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 13:12:39.578406  982934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:12:39.578449  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:12:40.896125  982934 crio.go:462] duration metric: took 1.322306951s to copy over tarball
	I0729 13:12:40.896207  982934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:12:43.067072  982934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.170831369s)
	I0729 13:12:43.067107  982934 crio.go:469] duration metric: took 2.170947633s to extract the tarball
	I0729 13:12:43.067116  982934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 13:12:43.106903  982934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:12:43.153274  982934 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:12:43.153302  982934 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:12:43.153314  982934 kubeadm.go:934] updating node { 192.168.39.103 8443 v1.30.3 crio true true} ...
	I0729 13:12:43.153439  982934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-881745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:12:43.153524  982934 ssh_runner.go:195] Run: crio config
	I0729 13:12:43.207924  982934 cni.go:84] Creating CNI manager for ""
	I0729 13:12:43.207951  982934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:12:43.207964  982934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:12:43.207989  982934 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-881745 NodeName:addons-881745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:12:43.208174  982934 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-881745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:12:43.208253  982934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:12:43.218326  982934 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:12:43.218397  982934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 13:12:43.227846  982934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 13:12:43.244031  982934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:12:43.261201  982934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 13:12:43.278636  982934 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I0729 13:12:43.282381  982934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
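
The /etc/hosts edits for host.minikube.internal and control-plane.minikube.internal follow the same idempotent pattern: strip any stale entry, then append the current IP mapping. A small Go equivalent of that shell one-liner (a sketch, assuming the same IP and hostname as this run) is:

// Assumed sketch of the idempotent /etc/hosts update in the log: drop any old
// control-plane.minikube.internal line, then append the fresh mapping.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.103\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as `grep -v $'\tcontrol-plane.minikube.internal$'` in the log.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}
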
	I0729 13:12:43.294256  982934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:12:43.433143  982934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:12:43.449659  982934 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745 for IP: 192.168.39.103
	I0729 13:12:43.449681  982934 certs.go:194] generating shared ca certs ...
	I0729 13:12:43.449711  982934 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:43.449861  982934 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:12:43.707812  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt ...
	I0729 13:12:43.707847  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt: {Name:mk1f3b9879e632d5d36f972daaa00444d9485e92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:43.708029  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key ...
	I0729 13:12:43.708045  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key: {Name:mkc81de37ebd536a1b656d14ad97b60baeff9d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:43.708159  982934 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:12:44.004872  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt ...
	I0729 13:12:44.004906  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt: {Name:mkf25694e23f33f75a76ea593b701a738621b1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.005098  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key ...
	I0729 13:12:44.005115  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key: {Name:mkf146f5ce823ad2153a06ce2b555e53cb297941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.005216  982934 certs.go:256] generating profile certs ...
	I0729 13:12:44.005294  982934 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.key
	I0729 13:12:44.005312  982934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt with IP's: []
	I0729 13:12:44.082140  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt ...
	I0729 13:12:44.082174  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: {Name:mka4cc3b622a5d75fae1730c6118a6f1db1caa3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.082322  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.key ...
	I0729 13:12:44.082333  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.key: {Name:mkb686f7e93a16ce26e027e16c3e1502a18673e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.082401  982934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key.0c3e969b
	I0729 13:12:44.082419  982934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt.0c3e969b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103]
	I0729 13:12:44.372075  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt.0c3e969b ...
	I0729 13:12:44.372104  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt.0c3e969b: {Name:mk34bd4b76752e3494785ed6b9787f052f0b605d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.372256  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key.0c3e969b ...
	I0729 13:12:44.372270  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key.0c3e969b: {Name:mk2e60c9c8fd7053dfd086bf083e7145cca45668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.372339  982934 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt.0c3e969b -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt
	I0729 13:12:44.372442  982934 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key.0c3e969b -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key
	I0729 13:12:44.372486  982934 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.key
	I0729 13:12:44.372504  982934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.crt with IP's: []
	I0729 13:12:44.448296  982934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.crt ...
	I0729 13:12:44.448326  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.crt: {Name:mk8304498a5a21b32735469d642d814ee1ca1798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:12:44.448491  982934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.key ...
	I0729 13:12:44.448504  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.key: {Name:mk7f60229bf97d46fb4c2d73ae61c662f3f4ee5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
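
The certs.go/crypto.go lines above generate the shared CAs (minikubeCA, proxyClientCA) and the per-profile client, apiserver, and aggregator certs. As a hedged illustration of what "generating a CA cert" amounts to (not minikube's actual crypto.go, and with illustrative output paths), a self-signed CA can be produced with the standard library like this:

// Sketch: create an RSA key and a self-signed CA certificate, then write the PEM pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	crt := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("ca.crt", crt, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
		log.Fatal(err)
	}
}
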
	I0729 13:12:44.448673  982934 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:12:44.448708  982934 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:12:44.448731  982934 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:12:44.448755  982934 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:12:44.449430  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:12:44.478702  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:12:44.501678  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:12:44.524992  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:12:44.547538  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 13:12:44.570360  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:12:44.592721  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:12:44.614899  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 13:12:44.638123  982934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:12:44.660642  982934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:12:44.675935  982934 ssh_runner.go:195] Run: openssl version
	I0729 13:12:44.681508  982934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:12:44.691356  982934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:12:44.695599  982934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:12:44.695655  982934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:12:44.701355  982934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
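
Installing minikubeCA into the system trust store uses OpenSSL's hash-link convention: the cert is linked into /etc/ssl/certs both by name and as <subject-hash>.0 (b5213941.0 in this run) so hash-based lookup finds it. A sketch of that step in Go (shelling out to openssl for the hash, as the log does) could look like:

// Hedged sketch of the CA install step: ask openssl for the subject hash of
// minikubeCA.pem and link it under /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			log.Fatal(err)
		}
	}
}
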
	I0729 13:12:44.711855  982934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:12:44.715695  982934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:12:44.715749  982934 kubeadm.go:392] StartCluster: {Name:addons-881745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-881745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:12:44.715840  982934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:12:44.715897  982934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:12:44.750470  982934 cri.go:89] found id: ""
	I0729 13:12:44.750556  982934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:12:44.760709  982934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:12:44.769941  982934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:12:44.779519  982934 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:12:44.779537  982934 kubeadm.go:157] found existing configuration files:
	
	I0729 13:12:44.779576  982934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:12:44.788536  982934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:12:44.788590  982934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:12:44.797726  982934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:12:44.806209  982934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:12:44.806259  982934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:12:44.815032  982934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:12:44.823490  982934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:12:44.823538  982934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:12:44.832451  982934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:12:44.841179  982934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:12:44.841226  982934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:12:44.849802  982934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:12:44.905770  982934 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:12:44.905908  982934 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:12:45.041407  982934 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:12:45.041567  982934 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:12:45.041713  982934 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:12:45.270727  982934 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:12:45.396728  982934 out.go:204]   - Generating certificates and keys ...
	I0729 13:12:45.396871  982934 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:12:45.396971  982934 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:12:45.397066  982934 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 13:12:45.493817  982934 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 13:12:45.712513  982934 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 13:12:45.795441  982934 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 13:12:46.020295  982934 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 13:12:46.020465  982934 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-881745 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I0729 13:12:46.315872  982934 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 13:12:46.316021  982934 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-881745 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I0729 13:12:46.389329  982934 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 13:12:46.463885  982934 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 13:12:46.579876  982934 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 13:12:46.579981  982934 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:12:46.694566  982934 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:12:46.930327  982934 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:12:47.032624  982934 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:12:47.118168  982934 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:12:47.403080  982934 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:12:47.403630  982934 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:12:47.405981  982934 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:12:47.407483  982934 out.go:204]   - Booting up control plane ...
	I0729 13:12:47.407562  982934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:12:47.407652  982934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:12:47.408154  982934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:12:47.427983  982934 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:12:47.430812  982934 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:12:47.430868  982934 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:12:47.561774  982934 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:12:47.561889  982934 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:12:48.062427  982934 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.964177ms
	I0729 13:12:48.062512  982934 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:12:52.564225  982934 kubeadm.go:310] [api-check] The API server is healthy after 4.501999443s
	I0729 13:12:52.574752  982934 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:12:52.595522  982934 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:12:52.627486  982934 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:12:52.627705  982934 kubeadm.go:310] [mark-control-plane] Marking the node addons-881745 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:12:52.641402  982934 kubeadm.go:310] [bootstrap-token] Using token: ot71em.9uxc3f9qsdcuq9f5
	I0729 13:12:52.642822  982934 out.go:204]   - Configuring RBAC rules ...
	I0729 13:12:52.642944  982934 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:12:52.646755  982934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:12:52.652949  982934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:12:52.657939  982934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:12:52.663016  982934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:12:52.665859  982934 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:12:52.968601  982934 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:12:53.400501  982934 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:12:53.970229  982934 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:12:53.970751  982934 kubeadm.go:310] 
	I0729 13:12:53.970823  982934 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:12:53.970840  982934 kubeadm.go:310] 
	I0729 13:12:53.970926  982934 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:12:53.970935  982934 kubeadm.go:310] 
	I0729 13:12:53.971001  982934 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:12:53.971456  982934 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:12:53.971519  982934 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:12:53.971527  982934 kubeadm.go:310] 
	I0729 13:12:53.971595  982934 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:12:53.971604  982934 kubeadm.go:310] 
	I0729 13:12:53.971642  982934 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:12:53.971649  982934 kubeadm.go:310] 
	I0729 13:12:53.971690  982934 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:12:53.971760  982934 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:12:53.971825  982934 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:12:53.971831  982934 kubeadm.go:310] 
	I0729 13:12:53.971910  982934 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:12:53.971978  982934 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:12:53.971984  982934 kubeadm.go:310] 
	I0729 13:12:53.972119  982934 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ot71em.9uxc3f9qsdcuq9f5 \
	I0729 13:12:53.972271  982934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 13:12:53.972304  982934 kubeadm.go:310] 	--control-plane 
	I0729 13:12:53.972314  982934 kubeadm.go:310] 
	I0729 13:12:53.972439  982934 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:12:53.972450  982934 kubeadm.go:310] 
	I0729 13:12:53.972561  982934 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ot71em.9uxc3f9qsdcuq9f5 \
	I0729 13:12:53.972697  982934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 13:12:53.973316  982934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:12:53.973471  982934 cni.go:84] Creating CNI manager for ""
	I0729 13:12:53.973489  982934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:12:53.975038  982934 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 13:12:53.976167  982934 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 13:12:53.987551  982934 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
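
Only the size of the bridge CNI config is logged (496 bytes written to /etc/cni/net.d/1-k8s.conflist); its exact contents are not shown here. The sketch below writes an assumed, illustrative bridge+portmap conflist using the pod CIDR from the kubeadm config (10.244.0.0/16), purely to show the shape of such a file; the real template may differ:

// Assumed example only: write an illustrative bridge CNI conflist the same way the log does.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.244.0.0/16"}]]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
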
	I0729 13:12:54.006767  982934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:12:54.006877  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-881745 minikube.k8s.io/updated_at=2024_07_29T13_12_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=addons-881745 minikube.k8s.io/primary=true
	I0729 13:12:54.006877  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:54.040528  982934 ops.go:34] apiserver oom_adj: -16
	I0729 13:12:54.106539  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:54.607265  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:55.106823  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:55.607114  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:56.106927  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:56.606624  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:57.107513  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:57.607394  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:58.106969  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:58.606846  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:59.107519  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:12:59.606601  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:00.106730  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:00.607490  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:01.107013  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:01.606588  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:02.107516  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:02.607583  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:03.107283  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:03.606925  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:04.106767  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:04.606846  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:05.106655  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:05.606952  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:06.107272  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:06.606704  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:07.107284  982934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:13:07.202825  982934 kubeadm.go:1113] duration metric: took 13.196015361s to wait for elevateKubeSystemPrivileges
	I0729 13:13:07.202872  982934 kubeadm.go:394] duration metric: took 22.487128484s to StartCluster
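
The repeated "kubectl get sa default" runs above are a retry loop: after creating the minikube-rbac clusterrolebinding, minikube polls roughly every 500ms until the default ServiceAccount exists, which took about 13s here. A hedged sketch of such a loop (the timeout value is illustrative) is:

// Assumed sketch of the retry loop visible in the log.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			log.Println("default service account exists; kube-system privileges are ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}
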
	I0729 13:13:07.202902  982934 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:13:07.203068  982934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:13:07.203547  982934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:13:07.203757  982934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 13:13:07.203792  982934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:13:07.203881  982934 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 13:13:07.204005  982934 addons.go:69] Setting yakd=true in profile "addons-881745"
	I0729 13:13:07.204040  982934 config.go:182] Loaded profile config "addons-881745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:13:07.204051  982934 addons.go:234] Setting addon yakd=true in "addons-881745"
	I0729 13:13:07.204046  982934 addons.go:69] Setting inspektor-gadget=true in profile "addons-881745"
	I0729 13:13:07.204058  982934 addons.go:69] Setting gcp-auth=true in profile "addons-881745"
	I0729 13:13:07.204089  982934 addons.go:234] Setting addon inspektor-gadget=true in "addons-881745"
	I0729 13:13:07.204091  982934 addons.go:69] Setting ingress=true in profile "addons-881745"
	I0729 13:13:07.204085  982934 addons.go:69] Setting storage-provisioner=true in profile "addons-881745"
	I0729 13:13:07.204099  982934 mustload.go:65] Loading cluster: addons-881745
	I0729 13:13:07.204111  982934 addons.go:69] Setting volcano=true in profile "addons-881745"
	I0729 13:13:07.204120  982934 addons.go:234] Setting addon storage-provisioner=true in "addons-881745"
	I0729 13:13:07.204118  982934 addons.go:69] Setting helm-tiller=true in profile "addons-881745"
	I0729 13:13:07.204130  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204133  982934 addons.go:234] Setting addon volcano=true in "addons-881745"
	I0729 13:13:07.204140  982934 addons.go:234] Setting addon helm-tiller=true in "addons-881745"
	I0729 13:13:07.204148  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204152  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204168  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204238  982934 addons.go:69] Setting metrics-server=true in profile "addons-881745"
	I0729 13:13:07.204270  982934 addons.go:234] Setting addon metrics-server=true in "addons-881745"
	I0729 13:13:07.204290  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204321  982934 config.go:182] Loaded profile config "addons-881745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:13:07.204629  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.204662  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.204678  982934 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-881745"
	I0729 13:13:07.204706  982934 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-881745"
	I0729 13:13:07.204720  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.204735  982934 addons.go:69] Setting registry=true in profile "addons-881745"
	I0729 13:13:07.204739  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.204751  982934 addons.go:234] Setting addon registry=true in "addons-881745"
	I0729 13:13:07.204774  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.204780  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204794  982934 addons.go:69] Setting default-storageclass=true in profile "addons-881745"
	I0729 13:13:07.204809  982934 addons.go:69] Setting ingress-dns=true in profile "addons-881745"
	I0729 13:13:07.204825  982934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-881745"
	I0729 13:13:07.204797  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.204840  982934 addons.go:234] Setting addon ingress-dns=true in "addons-881745"
	I0729 13:13:07.204726  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204872  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204112  982934 addons.go:234] Setting addon ingress=true in "addons-881745"
	I0729 13:13:07.204094  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204101  982934 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-881745"
	I0729 13:13:07.205110  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205128  982934 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-881745"
	I0729 13:13:07.204678  982934 addons.go:69] Setting volumesnapshots=true in profile "addons-881745"
	I0729 13:13:07.205141  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205152  982934 addons.go:234] Setting addon volumesnapshots=true in "addons-881745"
	I0729 13:13:07.204661  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205179  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205182  982934 addons.go:69] Setting cloud-spanner=true in profile "addons-881745"
	I0729 13:13:07.205203  982934 addons.go:234] Setting addon cloud-spanner=true in "addons-881745"
	I0729 13:13:07.205211  982934 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-881745"
	I0729 13:13:07.205249  982934 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-881745"
	I0729 13:13:07.205271  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.204661  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205304  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.205342  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205292  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205409  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205433  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205444  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205455  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205474  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205476  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205497  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205610  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205617  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205633  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205643  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.205768  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.205890  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.205909  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.205916  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.206018  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.206053  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.206102  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.206128  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.209063  982934 out.go:177] * Verifying Kubernetes components...
	I0729 13:13:07.211091  982934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:13:07.225432  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0729 13:13:07.225528  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40797
	I0729 13:13:07.225590  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0729 13:13:07.225634  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0729 13:13:07.228797  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.228853  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.240402  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.240461  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.240515  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.241083  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.241104  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.241254  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.241265  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.241383  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.241394  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.242232  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.242279  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.242287  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.243078  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.243081  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.243125  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.243155  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.243369  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.244981  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.245535  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.245556  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.245983  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.246043  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.246425  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.246457  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.247168  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.247193  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.253089  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I0729 13:13:07.253639  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.254220  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.254238  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.254621  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.255210  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.255235  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.266769  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0729 13:13:07.267329  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0729 13:13:07.267852  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.268486  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.268504  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.270328  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.270764  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.270869  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43167
	I0729 13:13:07.271211  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.271689  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.271710  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.271843  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.271862  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.272221  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.272401  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I0729 13:13:07.272968  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.273015  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.273054  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.273270  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.273321  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.273961  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.273986  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.274533  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.275070  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.275097  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.275291  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0729 13:13:07.275843  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.276445  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.276468  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.276821  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.276989  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.278292  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.278488  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.279167  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.282511  982934 addons.go:234] Setting addon default-storageclass=true in "addons-881745"
	I0729 13:13:07.282560  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.282943  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.282986  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.283217  982934 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 13:13:07.284835  982934 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 13:13:07.284854  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 13:13:07.284873  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.285818  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0729 13:13:07.286336  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0729 13:13:07.286780  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.287263  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.287283  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.287652  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.287690  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36265
	I0729 13:13:07.287926  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.288220  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.288245  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.288774  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.288801  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.288862  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.288895  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.289492  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.289510  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.289575  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.289589  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.289615  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.289784  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.289837  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.289944  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.289946  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.290028  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.291031  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.291071  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.291695  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
	I0729 13:13:07.291835  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0729 13:13:07.292179  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.292674  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.292693  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.292768  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.293160  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.293371  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.294350  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.294374  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.294979  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.295033  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.295232  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.295734  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0729 13:13:07.296246  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.296510  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.297298  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.297345  982934 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 13:13:07.298143  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.298160  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.298371  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
	I0729 13:13:07.298544  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.298880  982934 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 13:13:07.298901  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 13:13:07.298919  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.298949  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.299188  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.301083  982934 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 13:13:07.301512  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.302163  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.302180  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.302487  982934 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 13:13:07.302507  982934 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 13:13:07.302527  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.302540  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0729 13:13:07.302600  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.302630  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.302682  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43325
	I0729 13:13:07.303051  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.303071  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.303093  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.303317  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.303526  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.303691  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.303705  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.303813  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.304385  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.304404  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.304458  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.304672  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.305033  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.305690  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.305733  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.307221  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.307251  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.307272  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.307299  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.307361  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I0729 13:13:07.307517  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.307696  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.307705  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.308076  982934 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-881745"
	I0729 13:13:07.308128  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:07.308540  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.308587  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.308905  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.308922  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.308983  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.309248  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.309425  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.309564  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.310231  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I0729 13:13:07.310633  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.311135  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.311152  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.311505  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.312102  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.312139  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.312837  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.313334  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.313492  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.313533  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.315281  982934 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 13:13:07.316437  982934 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 13:13:07.316466  982934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 13:13:07.316487  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.321035  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0729 13:13:07.321901  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.323498  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.324030  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.324061  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.324448  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.324472  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.324545  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.324748  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.324928  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.325101  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.325467  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.325856  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.326844  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0729 13:13:07.327259  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.327746  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.327769  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.328046  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.328295  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.329091  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.330098  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.331016  982934 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 13:13:07.331130  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0729 13:13:07.331664  982934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 13:13:07.331771  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.332736  982934 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 13:13:07.332766  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 13:13:07.332792  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.332736  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.332852  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.333186  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.334190  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.334340  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.334641  982934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 13:13:07.335768  982934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 13:13:07.336229  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.336265  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0729 13:13:07.336913  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.336944  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.337116  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.337223  982934 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 13:13:07.337251  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 13:13:07.337273  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.337312  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.337487  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.337731  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.338699  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0729 13:13:07.340879  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.341048  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.341081  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.341259  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.341469  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.341631  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.341803  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.341817  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0729 13:13:07.342351  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.342759  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.342792  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.343109  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.343300  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.344626  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.346322  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 13:13:07.347855  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 13:13:07.349165  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 13:13:07.350448  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 13:13:07.350835  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0729 13:13:07.351034  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0729 13:13:07.351279  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.351440  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.352042  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.352063  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.352162  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.352178  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.352489  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.352520  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.352697  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.352724  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.353058  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 13:13:07.354476  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 13:13:07.355606  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 13:13:07.355806  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0729 13:13:07.355813  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.355881  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.356380  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.356920  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.356955  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.357371  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.357607  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 13:13:07.357608  982934 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 13:13:07.357670  982934 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0729 13:13:07.357985  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:07.358026  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:07.358890  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 13:13:07.358911  982934 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 13:13:07.358931  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.358984  982934 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 13:13:07.359000  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 13:13:07.359019  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.359085  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 13:13:07.359093  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 13:13:07.359107  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.361426  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.361476  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.362068  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.362137  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.363029  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.363048  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.363266  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.363428  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.363677  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.363683  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.363708  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.363784  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.363941  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.364077  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.364284  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.364869  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.365284  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.366662  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0729 13:13:07.367102  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.367117  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.368155  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.368656  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.368662  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0729 13:13:07.368996  982934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:13:07.369002  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.369355  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:07.369368  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:07.369505  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.369524  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.369573  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.369583  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.369608  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.369616  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.369643  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.369821  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.369896  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.369937  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:07.369945  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:07.369953  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:07.369959  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:07.370144  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:07.370170  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:07.370178  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 13:13:07.370256  982934 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 13:13:07.370450  982934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:13:07.370464  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:13:07.370481  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.370551  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.370595  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.370849  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.370866  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.370931  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.370965  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.371550  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.371682  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.371949  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.372107  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.373819  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.373863  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.374085  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.374140  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.374261  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.374278  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.374411  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.374588  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.374751  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.374935  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.375806  982934 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 13:13:07.375859  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.377394  982934 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 13:13:07.379019  982934 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 13:13:07.379092  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 13:13:07.379109  982934 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 13:13:07.379125  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.380589  982934 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 13:13:07.380612  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 13:13:07.380628  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.382066  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.382413  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.382438  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.382595  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.382798  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.382992  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.383153  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.384296  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.384800  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.384824  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.385026  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.385228  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.385398  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.385563  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	W0729 13:13:07.386060  982934 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48332->192.168.39.103:22: read: connection reset by peer
	I0729 13:13:07.386091  982934 retry.go:31] will retry after 133.910263ms: ssh: handshake failed: read tcp 192.168.39.1:48332->192.168.39.103:22: read: connection reset by peer
	I0729 13:13:07.386985  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I0729 13:13:07.387416  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.387821  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.387845  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.388195  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.388384  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.388536  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I0729 13:13:07.389117  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:07.389612  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:07.389631  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:07.389697  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.389953  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:07.390176  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:07.391464  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:07.391482  982934 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 13:13:07.391743  982934 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:13:07.391761  982934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:13:07.391779  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.393746  982934 out.go:177]   - Using image docker.io/busybox:stable
	I0729 13:13:07.394330  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.394786  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.394808  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.394980  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.395137  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.395199  982934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 13:13:07.395216  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 13:13:07.395237  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:07.395263  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.395756  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:07.397995  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.398343  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:07.398366  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:07.398516  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:07.398699  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:07.398857  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:07.398999  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	W0729 13:13:07.521004  982934 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48364->192.168.39.103:22: read: connection reset by peer
	I0729 13:13:07.521035  982934 retry.go:31] will retry after 466.463671ms: ssh: handshake failed: read tcp 192.168.39.1:48364->192.168.39.103:22: read: connection reset by peer
	I0729 13:13:07.664821  982934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:13:07.664838  982934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 13:13:07.745648  982934 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 13:13:07.745678  982934 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 13:13:07.792889  982934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 13:13:07.792917  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 13:13:07.830723  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 13:13:07.833402  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 13:13:07.846476  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 13:13:07.865509  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:13:07.875346  982934 node_ready.go:35] waiting up to 6m0s for node "addons-881745" to be "Ready" ...
	I0729 13:13:07.878373  982934 node_ready.go:49] node "addons-881745" has status "Ready":"True"
	I0729 13:13:07.878396  982934 node_ready.go:38] duration metric: took 3.018136ms for node "addons-881745" to be "Ready" ...
	I0729 13:13:07.878407  982934 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:13:07.885206  982934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bdkkm" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:07.924362  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 13:13:07.926015  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 13:13:07.930387  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:13:07.936884  982934 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 13:13:07.936900  982934 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 13:13:07.950284  982934 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 13:13:07.950305  982934 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 13:13:07.992465  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 13:13:07.992502  982934 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 13:13:08.018598  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 13:13:08.018630  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 13:13:08.027723  982934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 13:13:08.027748  982934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 13:13:08.045911  982934 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 13:13:08.045938  982934 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 13:13:08.101638  982934 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 13:13:08.101671  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 13:13:08.149228  982934 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 13:13:08.149258  982934 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 13:13:08.171543  982934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:13:08.171567  982934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 13:13:08.180882  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 13:13:08.180908  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 13:13:08.228089  982934 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 13:13:08.228119  982934 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 13:13:08.240975  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 13:13:08.241002  982934 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 13:13:08.352491  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 13:13:08.352518  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 13:13:08.354120  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 13:13:08.365987  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 13:13:08.406936  982934 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 13:13:08.406971  982934 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 13:13:08.407537  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 13:13:08.497281  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 13:13:08.497313  982934 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 13:13:08.500368  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 13:13:08.500386  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 13:13:08.591103  982934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 13:13:08.591141  982934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 13:13:08.659214  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 13:13:08.659242  982934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 13:13:08.676940  982934 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 13:13:08.676968  982934 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 13:13:08.728110  982934 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 13:13:08.728134  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 13:13:08.887605  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 13:13:08.887628  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 13:13:08.900489  982934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 13:13:08.900535  982934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 13:13:08.938882  982934 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 13:13:08.938914  982934 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 13:13:09.109801  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 13:13:09.137287  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 13:13:09.137314  982934 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 13:13:09.170943  982934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 13:13:09.170970  982934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 13:13:09.177342  982934 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 13:13:09.177364  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 13:13:09.457622  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 13:13:09.457649  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 13:13:09.487008  982934 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 13:13:09.487040  982934 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 13:13:09.504900  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 13:13:09.676868  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 13:13:09.676906  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 13:13:09.837675  982934 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 13:13:09.837708  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 13:13:09.898201  982934 pod_ready.go:102] pod "coredns-7db6d8ff4d-bdkkm" in "kube-system" namespace has status "Ready":"False"
	I0729 13:13:09.994214  982934 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.329274493s)
	I0729 13:13:09.994247  982934 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
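The two lines above show the CoreDNS ConfigMap being rewritten so that host.minikube.internal resolves to the host IP (192.168.39.1). A minimal sketch for confirming the injected hosts stanza afterwards, assuming kubectl is pointed at this cluster (the grep pattern is illustrative):

    # print the Corefile and show the injected hosts block
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'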
	I0729 13:13:10.135793  982934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 13:13:10.135823  982934 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 13:13:10.173833  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 13:13:10.432856  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 13:13:10.498456  982934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-881745" context rescaled to 1 replicas
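The rescale above trims CoreDNS to a single replica for this single-node cluster. A sketch of the equivalent manual command, assuming the same context:

    kubectl -n kube-system scale deployment coredns --replicas=1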
	I0729 13:13:10.846703  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.015935057s)
	I0729 13:13:10.846771  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:10.846785  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:10.847227  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:10.847235  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:10.847252  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:10.847281  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:10.847298  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:10.847574  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:10.847596  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:12.000846  982934 pod_ready.go:92] pod "coredns-7db6d8ff4d-bdkkm" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.000883  982934 pod_ready.go:81] duration metric: took 4.115651178s for pod "coredns-7db6d8ff4d-bdkkm" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.000898  982934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nnxtv" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.066890  982934 pod_ready.go:92] pod "coredns-7db6d8ff4d-nnxtv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.066918  982934 pod_ready.go:81] duration metric: took 66.012864ms for pod "coredns-7db6d8ff4d-nnxtv" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.066929  982934 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.139792  982934 pod_ready.go:92] pod "etcd-addons-881745" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.139815  982934 pod_ready.go:81] duration metric: took 72.880094ms for pod "etcd-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.139826  982934 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.197077  982934 pod_ready.go:92] pod "kube-apiserver-addons-881745" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.197100  982934 pod_ready.go:81] duration metric: took 57.268132ms for pod "kube-apiserver-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.197111  982934 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.265734  982934 pod_ready.go:92] pod "kube-controller-manager-addons-881745" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.265764  982934 pod_ready.go:81] duration metric: took 68.644666ms for pod "kube-controller-manager-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.265780  982934 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6h84v" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.358540  982934 pod_ready.go:92] pod "kube-proxy-6h84v" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.358565  982934 pod_ready.go:81] duration metric: took 92.77809ms for pod "kube-proxy-6h84v" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.358574  982934 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.754065  982934 pod_ready.go:92] pod "kube-scheduler-addons-881745" in "kube-system" namespace has status "Ready":"True"
	I0729 13:13:12.754103  982934 pod_ready.go:81] duration metric: took 395.520053ms for pod "kube-scheduler-addons-881745" in "kube-system" namespace to be "Ready" ...
	I0729 13:13:12.754117  982934 pod_ready.go:38] duration metric: took 4.875693531s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:13:12.754139  982934 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:13:12.754217  982934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:13:14.314978  982934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 13:13:14.315022  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:14.318366  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:14.318765  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:14.318786  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:14.319034  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:14.319278  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:14.319442  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:14.319609  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:14.663843  982934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
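The two scp steps above stage the GCP credential files that the gcp-auth addon later mounts into pods. A sketch for confirming they landed on the node (paths come from the log; the ssh form is illustrative):

    # list the staged credential files inside the minikube VM
    minikube -p addons-881745 ssh -- ls -l \
      /var/lib/minikube/google_application_credentials.json \
      /var/lib/minikube/google_cloud_project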
	I0729 13:13:14.867487  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.034040042s)
	I0729 13:13:14.867547  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867548  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.02103527s)
	I0729 13:13:14.867560  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867578  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867591  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867635  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.002089746s)
	I0729 13:13:14.867682  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.943289426s)
	I0729 13:13:14.867707  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867730  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867737  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.941683967s)
	I0729 13:13:14.867683  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867794  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867771  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867835  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.867883  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.867890  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.867898  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.867905  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868124  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868140  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.868136  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.513986484s)
	I0729 13:13:14.868151  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868161  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868170  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868184  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868277  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.502260874s)
	I0729 13:13:14.868280  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868284  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868295  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868322  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868322  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868329  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868335  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.868338  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.460777083s)
	I0729 13:13:14.868343  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868350  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868354  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868364  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868405  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868428  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.868436  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.868442  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.868484  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868490  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.868500  982934 addons.go:475] Verifying addon ingress=true in "addons-881745"
	I0729 13:13:14.868763  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868795  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.868815  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.868851  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.869046  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.869070  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.869075  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.869082  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.869089  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.869131  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.869151  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.869157  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.869164  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.869170  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.869514  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.937383876s)
	I0729 13:13:14.869543  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.869552  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.869858  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.869882  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.869889  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.869895  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.869902  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.870273  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.870298  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.870304  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.870335  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.870352  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.870358  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.870365  982934 addons.go:475] Verifying addon registry=true in "addons-881745"
	I0729 13:13:14.870752  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.760915055s)
	I0729 13:13:14.870783  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.870795  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.870911  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.365975742s)
	I0729 13:13:14.870936  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.870953  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.871026  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.871054  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.871062  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.871070  982934 addons.go:475] Verifying addon metrics-server=true in "addons-881745"
	I0729 13:13:14.871346  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.871383  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.871391  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.871399  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.871406  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.871470  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.871494  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.871501  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.871508  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.871526  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.872031  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.872062  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.872069  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.873237  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.873266  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.873272  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874216  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.874227  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.874242  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874249  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.874250  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.874265  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.874305  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.874231  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874317  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.874324  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.874414  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.874421  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874824  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.874897  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.874917  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.874960  982934 out.go:177] * Verifying registry addon...
	I0729 13:13:14.875109  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.875137  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.875144  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.875352  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:14.875382  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.875389  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.875525  982934 out.go:177] * Verifying ingress addon...
	I0729 13:13:14.876940  982934 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-881745 service yakd-dashboard -n yakd-dashboard
	
	I0729 13:13:14.877098  982934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 13:13:14.877659  982934 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 13:13:14.959571  982934 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 13:13:14.959593  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:14.969105  982934 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 13:13:14.969136  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
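The kapi.go polling above waits on the registry and ingress-nginx pods by label selector. A rough manual equivalent, assuming kubectl targets the addons-881745 cluster (the timeout value is illustrative):

    # inspect the registry pods and block until the ingress controller is ready
    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
    kubectl -n ingress-nginx wait --for=condition=ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=6m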
	I0729 13:13:14.986185  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:14.986209  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:14.986536  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:14.986559  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:14.986603  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	W0729 13:13:14.986687  982934 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
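The warning above is an optimistic-concurrency conflict while clearing the default flag on the local-path StorageClass; the object changed between read and update, so the update is rejected and can simply be retried. What the addon attempts amounts to patching the well-known default-class annotation; a sketch, using the class name from the message:

    # mark local-path as a non-default storage class
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'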
	I0729 13:13:15.020692  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:15.020719  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:15.021124  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:15.021145  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:15.022472  982934 addons.go:234] Setting addon gcp-auth=true in "addons-881745"
	I0729 13:13:15.022533  982934 host.go:66] Checking if "addons-881745" exists ...
	I0729 13:13:15.023001  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:15.023044  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:15.037556  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0729 13:13:15.038028  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:15.038600  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:15.038631  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:15.038994  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:15.039575  982934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:13:15.039611  982934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:13:15.053954  982934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I0729 13:13:15.054403  982934 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:13:15.054941  982934 main.go:141] libmachine: Using API Version  1
	I0729 13:13:15.054972  982934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:13:15.055307  982934 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:13:15.055493  982934 main.go:141] libmachine: (addons-881745) Calling .GetState
	I0729 13:13:15.056952  982934 main.go:141] libmachine: (addons-881745) Calling .DriverName
	I0729 13:13:15.057194  982934 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 13:13:15.057221  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHHostname
	I0729 13:13:15.059739  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:15.060179  982934 main.go:141] libmachine: (addons-881745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:39:fc", ip: ""} in network mk-addons-881745: {Iface:virbr1 ExpiryTime:2024-07-29 14:12:25 +0000 UTC Type:0 Mac:52:54:00:c8:39:fc Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-881745 Clientid:01:52:54:00:c8:39:fc}
	I0729 13:13:15.060227  982934 main.go:141] libmachine: (addons-881745) DBG | domain addons-881745 has defined IP address 192.168.39.103 and MAC address 52:54:00:c8:39:fc in network mk-addons-881745
	I0729 13:13:15.060302  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHPort
	I0729 13:13:15.060541  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHKeyPath
	I0729 13:13:15.060702  982934 main.go:141] libmachine: (addons-881745) Calling .GetSSHUsername
	I0729 13:13:15.060832  982934 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/addons-881745/id_rsa Username:docker}
	I0729 13:13:15.405232  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:15.406518  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:15.458661  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.284772514s)
	W0729 13:13:15.458720  982934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 13:13:15.458773  982934 retry.go:31] will retry after 285.489337ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 13:13:15.745019  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
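The earlier apply failed because the VolumeSnapshotClass object was submitted before its CRD was established, so the step is retried (here with --force). A sketch of the safer ordering, using the CRD name from the stderr above:

    # wait for the CRD to be established, then apply the snapshot class again
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml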
	I0729 13:13:15.897484  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:15.900064  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:16.381473  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:16.382827  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:16.894309  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:16.894529  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:17.264080  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.831151507s)
	I0729 13:13:17.264149  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:17.264160  982934 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.206940963s)
	I0729 13:13:17.264166  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:17.264086  982934 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.509839577s)
	I0729 13:13:17.264432  982934 api_server.go:72] duration metric: took 10.060599333s to wait for apiserver process to appear ...
	I0729 13:13:17.264444  982934 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:13:17.264466  982934 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0729 13:13:17.264623  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:17.264641  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:17.264652  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:17.264660  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:17.264920  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:17.265035  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:17.265055  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:17.265072  982934 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-881745"
	I0729 13:13:17.266425  982934 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 13:13:17.266434  982934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 13:13:17.268449  982934 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 13:13:17.269311  982934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 13:13:17.269772  982934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 13:13:17.269794  982934 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 13:13:17.286885  982934 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 13:13:17.286903  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:17.291000  982934 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0729 13:13:17.292944  982934 api_server.go:141] control plane version: v1.30.3
	I0729 13:13:17.292966  982934 api_server.go:131] duration metric: took 28.514891ms to wait for apiserver health ...
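The healthz probe above hits the apiserver directly on the VM address from this run. Two roughly equivalent manual checks, assuming access to this cluster (-k only skips TLS verification for the self-signed cert):

    curl -sk https://192.168.39.103:8443/healthz
    kubectl get --raw='/healthz'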
	I0729 13:13:17.292975  982934 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:13:17.314853  982934 system_pods.go:59] 19 kube-system pods found
	I0729 13:13:17.314888  982934 system_pods.go:61] "coredns-7db6d8ff4d-bdkkm" [563ebe0c-e2ff-41b8-b21f-cdcd975e3c60] Running
	I0729 13:13:17.314894  982934 system_pods.go:61] "coredns-7db6d8ff4d-nnxtv" [f58f7ace-0e01-40e5-896c-890372ed6c46] Running
	I0729 13:13:17.314905  982934 system_pods.go:61] "csi-hostpath-attacher-0" [649a67d2-d600-48f0-b5c2-2ea497257b21] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 13:13:17.314912  982934 system_pods.go:61] "csi-hostpath-resizer-0" [f0cc70bb-8614-4719-96e4-e3a0bc9675cc] Pending
	I0729 13:13:17.314920  982934 system_pods.go:61] "csi-hostpathplugin-g7jgm" [15890aa7-f2ca-4378-82f6-fd2f7c53e367] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 13:13:17.314928  982934 system_pods.go:61] "etcd-addons-881745" [6a19058a-445b-42ed-a2ef-1507df695a9b] Running
	I0729 13:13:17.314932  982934 system_pods.go:61] "kube-apiserver-addons-881745" [0e3895dd-a162-40f5-b3eb-4988ff45bc70] Running
	I0729 13:13:17.314935  982934 system_pods.go:61] "kube-controller-manager-addons-881745" [7f441c06-71eb-40d9-b398-c45e2db0b580] Running
	I0729 13:13:17.314940  982934 system_pods.go:61] "kube-ingress-dns-minikube" [3782b3a7-db92-4e0b-9a46-55e0b7de43be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 13:13:17.314944  982934 system_pods.go:61] "kube-proxy-6h84v" [241d7377-0c07-446a-8758-72ee113b1999] Running
	I0729 13:13:17.314947  982934 system_pods.go:61] "kube-scheduler-addons-881745" [1d98f0bf-f865-40c8-a430-0eb7fc1967a0] Running
	I0729 13:13:17.314952  982934 system_pods.go:61] "metrics-server-c59844bb4-5nbcm" [c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:13:17.314968  982934 system_pods.go:61] "nvidia-device-plugin-daemonset-2mgsg" [e736085a-8a65-4ef9-a69b-d309fa46e0b7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 13:13:17.314976  982934 system_pods.go:61] "registry-656c9c8d9c-z2p5s" [96173b90-f986-42c3-8dab-68759432df0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 13:13:17.314982  982934 system_pods.go:61] "registry-proxy-ljt48" [28156caa-8805-4e7d-a425-0e65cdbb245b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 13:13:17.315023  982934 system_pods.go:61] "snapshot-controller-745499f584-kh4vd" [5883f9c2-34cc-4011-880b-d4900758b609] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 13:13:17.315037  982934 system_pods.go:61] "snapshot-controller-745499f584-v464c" [edf1a75b-61a6-4cec-9bcf-df387ceb3aa6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 13:13:17.315041  982934 system_pods.go:61] "storage-provisioner" [b4440573-37d1-4041-a657-1e7338cd83c0] Running
	I0729 13:13:17.315046  982934 system_pods.go:61] "tiller-deploy-6677d64bcd-h58h7" [37213271-b4d7-4a89-bd83-34aacc2ec941] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 13:13:17.315051  982934 system_pods.go:74] duration metric: took 22.071584ms to wait for pod list to return data ...
	I0729 13:13:17.315067  982934 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:13:17.323857  982934 default_sa.go:45] found service account: "default"
	I0729 13:13:17.323883  982934 default_sa.go:55] duration metric: took 8.805986ms for default service account to be created ...
	I0729 13:13:17.323895  982934 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:13:17.340868  982934 system_pods.go:86] 19 kube-system pods found
	I0729 13:13:17.340895  982934 system_pods.go:89] "coredns-7db6d8ff4d-bdkkm" [563ebe0c-e2ff-41b8-b21f-cdcd975e3c60] Running
	I0729 13:13:17.340900  982934 system_pods.go:89] "coredns-7db6d8ff4d-nnxtv" [f58f7ace-0e01-40e5-896c-890372ed6c46] Running
	I0729 13:13:17.340907  982934 system_pods.go:89] "csi-hostpath-attacher-0" [649a67d2-d600-48f0-b5c2-2ea497257b21] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 13:13:17.340912  982934 system_pods.go:89] "csi-hostpath-resizer-0" [f0cc70bb-8614-4719-96e4-e3a0bc9675cc] Pending
	I0729 13:13:17.340922  982934 system_pods.go:89] "csi-hostpathplugin-g7jgm" [15890aa7-f2ca-4378-82f6-fd2f7c53e367] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 13:13:17.340927  982934 system_pods.go:89] "etcd-addons-881745" [6a19058a-445b-42ed-a2ef-1507df695a9b] Running
	I0729 13:13:17.340931  982934 system_pods.go:89] "kube-apiserver-addons-881745" [0e3895dd-a162-40f5-b3eb-4988ff45bc70] Running
	I0729 13:13:17.340935  982934 system_pods.go:89] "kube-controller-manager-addons-881745" [7f441c06-71eb-40d9-b398-c45e2db0b580] Running
	I0729 13:13:17.340970  982934 system_pods.go:89] "kube-ingress-dns-minikube" [3782b3a7-db92-4e0b-9a46-55e0b7de43be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 13:13:17.340977  982934 system_pods.go:89] "kube-proxy-6h84v" [241d7377-0c07-446a-8758-72ee113b1999] Running
	I0729 13:13:17.340982  982934 system_pods.go:89] "kube-scheduler-addons-881745" [1d98f0bf-f865-40c8-a430-0eb7fc1967a0] Running
	I0729 13:13:17.340991  982934 system_pods.go:89] "metrics-server-c59844bb4-5nbcm" [c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 13:13:17.340999  982934 system_pods.go:89] "nvidia-device-plugin-daemonset-2mgsg" [e736085a-8a65-4ef9-a69b-d309fa46e0b7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 13:13:17.341007  982934 system_pods.go:89] "registry-656c9c8d9c-z2p5s" [96173b90-f986-42c3-8dab-68759432df0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 13:13:17.341016  982934 system_pods.go:89] "registry-proxy-ljt48" [28156caa-8805-4e7d-a425-0e65cdbb245b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 13:13:17.341023  982934 system_pods.go:89] "snapshot-controller-745499f584-kh4vd" [5883f9c2-34cc-4011-880b-d4900758b609] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 13:13:17.341032  982934 system_pods.go:89] "snapshot-controller-745499f584-v464c" [edf1a75b-61a6-4cec-9bcf-df387ceb3aa6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 13:13:17.341038  982934 system_pods.go:89] "storage-provisioner" [b4440573-37d1-4041-a657-1e7338cd83c0] Running
	I0729 13:13:17.341044  982934 system_pods.go:89] "tiller-deploy-6677d64bcd-h58h7" [37213271-b4d7-4a89-bd83-34aacc2ec941] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 13:13:17.341052  982934 system_pods.go:126] duration metric: took 17.152648ms to wait for k8s-apps to be running ...
	I0729 13:13:17.341060  982934 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:13:17.341112  982934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
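The WaitForService step above simply asks systemd over SSH whether kubelet is active. A sketch of the same check run against the node directly (unit name assumed to be kubelet):

    minikube -p addons-881745 ssh -- sudo systemctl is-active kubelet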
	I0729 13:13:17.384237  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:17.384237  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:17.421922  982934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 13:13:17.421948  982934 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 13:13:17.466758  982934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 13:13:17.466783  982934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 13:13:17.518271  982934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 13:13:17.775039  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:17.890102  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:17.893184  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:18.283762  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:18.383359  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:18.383925  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:18.778455  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:18.888428  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:18.888691  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:19.307224  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:19.514046  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:19.514224  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:19.682214  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.937129618s)
	I0729 13:13:19.682295  982934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.163988798s)
	I0729 13:13:19.682231  982934 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.341094592s)
	I0729 13:13:19.682324  982934 system_svc.go:56] duration metric: took 2.341257823s WaitForService to wait for kubelet
	I0729 13:13:19.682330  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:19.682296  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:19.682337  982934 kubeadm.go:582] duration metric: took 12.478505048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:13:19.682363  982934 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:13:19.682370  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:19.682344  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:19.682732  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:19.682748  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:19.682757  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:19.682764  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:19.682794  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:19.682807  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:19.682817  982934 main.go:141] libmachine: Making call to close driver server
	I0729 13:13:19.682824  982934 main.go:141] libmachine: (addons-881745) Calling .Close
	I0729 13:13:19.683002  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:19.683032  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:19.683052  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:19.683279  982934 main.go:141] libmachine: (addons-881745) DBG | Closing plugin on server side
	I0729 13:13:19.683297  982934 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:13:19.683343  982934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:13:19.685392  982934 addons.go:475] Verifying addon gcp-auth=true in "addons-881745"
	I0729 13:13:19.686168  982934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:13:19.686191  982934 node_conditions.go:123] node cpu capacity is 2
	I0729 13:13:19.686220  982934 node_conditions.go:105] duration metric: took 3.851629ms to run NodePressure ...
	I0729 13:13:19.686233  982934 start.go:241] waiting for startup goroutines ...
	I0729 13:13:19.687268  982934 out.go:177] * Verifying gcp-auth addon...
	I0729 13:13:19.689094  982934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 13:13:19.691691  982934 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 13:13:19.691715  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:19.776028  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:19.883070  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:19.883150  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:20.193336  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:20.275399  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:20.381528  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:20.383169  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:20.692828  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:20.774717  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:20.882907  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:20.883346  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:21.193379  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:21.275601  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:21.382481  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:21.383526  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:21.692723  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:21.774559  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:21.883035  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:21.883356  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:22.193320  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:22.274340  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:22.381556  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:22.382023  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:22.693234  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:22.775303  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:22.882659  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:22.883039  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:23.193723  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:23.276738  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:23.382422  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:23.383346  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:23.704620  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:23.781894  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:23.884891  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:23.893294  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:24.192235  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:24.275461  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:24.383087  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:24.385480  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:24.693346  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:24.776072  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:24.882338  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:24.882783  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:25.193858  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:25.274918  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:25.381837  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:25.382014  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:25.692786  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:25.775077  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:25.881546  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:25.884162  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:26.193297  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:26.275806  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:26.383907  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:26.385119  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:26.694746  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:26.791765  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:26.886530  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:26.886603  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:27.192888  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:27.275148  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:27.383014  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:27.383912  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:27.693091  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:27.775402  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:27.882482  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:27.882754  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:28.193298  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:28.275721  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:28.382370  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:28.382715  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:28.693369  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:28.775534  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:28.883270  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:28.884568  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:29.192692  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:29.274549  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:29.383853  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:29.383870  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:29.693778  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:29.774962  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:29.881889  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:29.883706  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:30.194648  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:30.274556  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:30.383645  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:30.384043  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:30.693649  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:30.774474  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:30.882968  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:30.883017  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:31.193312  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:31.274816  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:31.383399  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:31.383767  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:31.693327  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:31.777772  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:31.889831  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:31.890169  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:32.430376  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:32.432235  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:32.432842  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:32.437523  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:32.692973  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:32.777587  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:32.881747  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:32.881915  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:33.196437  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:33.276270  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:33.382947  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:33.383273  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:33.693407  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:33.778477  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:33.883075  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:33.883246  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:34.193149  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:34.275056  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:34.381610  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:34.383721  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:34.693092  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:34.776425  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:34.882397  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:34.883899  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:35.193085  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:35.274932  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:35.382489  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:35.383556  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:35.692743  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:35.775705  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:35.882501  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:35.883777  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:36.194101  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:36.276189  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:36.383065  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:36.384728  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:36.868266  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:36.871038  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:36.886866  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:36.891747  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:37.192486  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:37.274478  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:37.382343  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:37.382723  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:37.693014  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:37.775170  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:37.882451  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:37.883211  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:38.193740  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:38.274996  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:38.381925  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:38.382129  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:38.693603  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:38.775499  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:38.881966  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:38.882190  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:39.192804  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:39.274748  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:39.382711  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:39.384368  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:39.692720  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:39.774007  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:39.886819  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:39.887299  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:40.193732  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:40.275047  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:40.381732  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:40.382055  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:40.693046  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:40.774944  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:40.881771  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:40.884187  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:41.193097  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:41.275266  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:41.396081  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:41.396232  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:41.693289  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:41.775228  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:41.881737  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:41.882535  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:42.193251  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:42.278235  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:42.382667  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:42.382891  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:42.693291  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:42.775796  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:42.885294  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:42.886049  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:43.193173  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:43.275752  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:43.384242  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:43.384277  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:43.693753  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:43.775167  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:43.882253  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:43.882752  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:44.193512  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:44.275231  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:44.382334  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:44.382936  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:44.693200  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:44.775041  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:44.883043  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:44.883474  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:45.417393  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:45.417592  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:45.419351  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:45.419352  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:45.692776  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:45.774895  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:45.881496  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:45.881798  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:46.192890  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:46.274909  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:46.381344  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:46.383624  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:46.692754  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:46.774623  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:46.884609  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:46.884714  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:47.193348  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:47.279219  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:47.382443  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:47.382849  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 13:13:47.693026  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:47.775452  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:47.882374  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:47.882785  982934 kapi.go:107] duration metric: took 33.005683413s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 13:13:48.192569  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:48.275025  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:48.382143  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:48.692949  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:48.774981  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:48.882246  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:49.193149  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:49.274735  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:49.382049  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:49.693566  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:49.776841  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:49.881970  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:50.194637  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:50.281926  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:50.382819  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:50.709807  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:50.774947  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:50.883668  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:51.193603  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:51.276033  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:51.382834  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:51.693445  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:51.775413  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:51.882181  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:52.193476  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:52.275720  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:52.382834  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:52.692713  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:52.775092  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:52.882646  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:53.192342  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:53.275396  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:53.382363  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:54.114655  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:54.117626  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:54.117763  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:54.195391  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:54.275346  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:54.383382  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:54.693627  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:54.780263  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:54.884220  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:55.194295  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:55.277659  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:55.382526  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:55.693350  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:55.779285  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:55.882317  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:56.193327  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:56.275385  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:56.382755  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:56.692529  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:56.775718  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:56.881488  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:57.193664  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:57.275247  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:57.382825  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:57.720255  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:57.774657  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:57.882202  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:58.193552  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:58.274726  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:58.381957  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:58.692678  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:58.774253  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:58.882263  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:59.193550  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:59.283675  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:59.382216  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:13:59.693151  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:13:59.775659  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:13:59.882075  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:00.193442  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:00.275995  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:00.382454  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:00.695123  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:00.774527  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:00.881518  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:01.193471  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:01.275601  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:01.382683  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:01.692273  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:01.775941  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:02.164538  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:02.192990  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:02.279329  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:02.383399  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:02.693143  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:02.775014  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:02.884251  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:03.193218  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:03.275728  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:03.381920  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:03.693061  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:03.775159  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:03.883212  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:04.192769  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:04.274897  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:04.381707  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:04.693067  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:04.774847  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:04.888252  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:05.193212  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:05.275371  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:05.382645  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:05.692240  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:05.776008  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:05.882068  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:06.194598  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:06.282423  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:06.386209  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:06.693448  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:06.775499  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:06.881497  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:07.199610  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:07.275108  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:07.453674  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:07.692339  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:07.777239  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:07.882681  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:08.192766  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:08.276300  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:08.382147  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:08.693540  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:08.777890  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:08.882444  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:09.193550  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:09.275538  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:09.381928  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:09.693019  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:09.775521  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:09.881737  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:10.194431  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:10.278485  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:10.382681  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:10.702207  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:10.775869  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:10.885357  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:11.193498  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:11.276145  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:11.385000  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:11.692793  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:11.774381  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:11.882056  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:12.193740  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:12.276837  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:12.387087  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:12.693070  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:12.775126  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:12.883143  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:13.201156  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:13.275680  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:13.381929  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:13.693806  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:13.777329  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:13.888690  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:14.199375  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:14.276597  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:14.391566  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:14.693841  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:14.774770  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:14.882446  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:15.193995  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:15.275271  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:15.383013  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:15.694756  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:15.774371  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:15.886564  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:16.193635  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:16.275841  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:16.382157  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:16.693077  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:16.775466  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:16.883432  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:17.192594  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:17.275217  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:17.382102  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:17.701847  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:17.776902  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 13:14:17.882190  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:18.193140  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:18.275422  982934 kapi.go:107] duration metric: took 1m1.006107116s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 13:14:18.381898  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:18.693507  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:18.881913  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:19.192606  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:19.382432  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:19.693567  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:19.882857  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:20.192887  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:20.382969  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:20.692504  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:20.882778  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:21.194002  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:21.382056  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:21.693117  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:21.882874  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:22.192521  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:22.382879  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:22.694036  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:22.883382  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:23.495876  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:23.496146  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:23.693393  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:23.882894  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:24.193110  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:24.382191  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:24.692722  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:24.882719  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:25.192490  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:25.382512  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:25.693198  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:26.125344  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:26.193682  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:26.384047  982934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 13:14:26.693393  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:26.884223  982934 kapi.go:107] duration metric: took 1m12.006560087s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 13:14:27.192375  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:27.693270  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:28.192543  982934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 13:14:28.692834  982934 kapi.go:107] duration metric: took 1m9.003734279s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 13:14:28.694584  982934 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-881745 cluster.
	I0729 13:14:28.696034  982934 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 13:14:28.697267  982934 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 13:14:28.698594  982934 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, metrics-server, inspektor-gadget, nvidia-device-plugin, ingress-dns, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 13:14:28.700296  982934 addons.go:510] duration metric: took 1m21.496411734s for enable addons: enabled=[cloud-spanner storage-provisioner metrics-server inspektor-gadget nvidia-device-plugin ingress-dns helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 13:14:28.700348  982934 start.go:246] waiting for cluster config update ...
	I0729 13:14:28.700375  982934 start.go:255] writing updated cluster config ...
	I0729 13:14:28.700691  982934 ssh_runner.go:195] Run: rm -f paused
	I0729 13:14:28.755794  982934 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:14:28.757654  982934 out.go:177] * Done! kubectl is now configured to use "addons-881745" cluster and "default" namespace by default
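	For reference, the gcp-auth opt-out mentioned in the output above is applied as a pod label at creation time. A minimal sketch of such a manifest follows; the pod name, container name, and image are placeholders, and the label value "true" is assumed — only the gcp-auth-skip-secret key comes from the addon message above.
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds               # placeholder name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"   # label key from the gcp-auth message; value assumed
	spec:
	  containers:
	  - name: app                      # placeholder container name
	    image: gcr.io/k8s-minikube/busybox   # image that appears elsewhere in this report
	    command: ["sleep", "3600"]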
	
	
	==> CRI-O <==
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.821655058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259241821626314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ba67ea4-b217-4a4c-94dd-1071a037f020 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.822303345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97ee46af-239b-48c6-bc40-69d7a1a926e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.822429576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97ee46af-239b-48c6-bc40-69d7a1a926e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.822669331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03956f3a1a8ac3750c40bbced60c54d9439a663565cb2985230d0c6349ac71e6,PodSandboxId:1e63b5c05279bc65911efc46cb36a4f0870c1de46d1ef3e55684feb2f714e9c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722259069614795418,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-52v9x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 472c31e0-c2e1-412b-97f0-1556647833f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6574499,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4170db4eaf6862d30bd6cf8dd7df7fab13d695f14f3469054a1a3416d0cb1090,PodSandboxId:6ff338d74a1d65f6c48602e0b890605cb8f7c6e4af93550c2b4dd43e90c387f1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722258929875618849,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 403be47e-4fc8-4b4a-92f4-57da8aa66907,},Annotations:map[string]string{io.kubernete
s.container.hash: a9ca56f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd582e26ab76a87df81618e94fd8911acbfa53120eb2ffa43337935feef74266,PodSandboxId:1d9d83f0ae40b93bf932d0aedaf05b8058d90f958605e2698d19722186362211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722258872239203544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687ccff9-8f64-4657-a5
49-dd263a4d8b14,},Annotations:map[string]string{io.kubernetes.container.hash: faf43a33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437,PodSandboxId:d221ea69d4276dea6a4b1cd2b5ed17b45667f1314ad3f625584be89e17f3ff7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722258829637465966,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5nbcm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f,},Annotations:map[string]string{io.kubernetes.container.hash: d49ff5d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7,PodSandboxId:b381e1ba8f57616cdf6b71685f775baf346c16b6b016fa6e99d6fb8281155773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258794805286033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4440573-37d1-4041-a657-1e7338cd83c0,},Annotations:map[string]string{io.kubernetes.container.hash: 26a0b021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02,PodSandboxId:56c539f8ea97ceec92269fadc899bd84ed43572eeea2fa4f7d088e9054959f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258790345619597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-bdkkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563ebe0c-e2ff-41b8-b21f-cdcd975e3c60,},Annotations:map[string]string{io.kubernetes.container.hash: ff128fc9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37,PodSandboxId:7d2a402f353cd23f877482699c9477c1cbb1adc625741bd158f94dff2fc13b1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258788174948470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6h84v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241d7377-0c07-446a-8758-72ee113b1999,},Annotations:map[string]string{io.kubernetes.container.hash: 6ccdd6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8,PodSandboxId:b374475df602822d212ec9948162991d11520a52c57f58326e656c710bbc090a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75
a899,State:CONTAINER_RUNNING,CreatedAt:1722258768602478955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df7eb1b39c9f83f6aef26e5aa39bf30,},Annotations:map[string]string{io.kubernetes.container.hash: 232cf87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423,PodSandboxId:71a81d84e6960da9a4615d5f8142d1971ef808083ba1196848b433145a1dd8de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedA
t:1722258768579427222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea9dd953528f9555b7b1165a4eb4160,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e,PodSandboxId:7c94e609457546c62f1cc54fd2c04af39c9a2fbc72d3ac7434868f037ed26816,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172225876856384
1342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad020bf239eb10f745c57431ca19780,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0,PodSandboxId:d8531751d3b1f1efe60e3c9cc0c84dd258cbe682fdc1aee6db7367efa5999249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258768538556224,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09433823db087c4b78e1f4444c870f1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97ee46af-239b-48c6-bc40-69d7a1a926e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.860189231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b85997dc-558c-4e61-bb2d-a32a3732728e name=/runtime.v1.RuntimeService/Version
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.860263496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b85997dc-558c-4e61-bb2d-a32a3732728e name=/runtime.v1.RuntimeService/Version
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.861269672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c306620-d8d5-40dd-ac7b-64e08d1fa984 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.862780122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259241862756263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c306620-d8d5-40dd-ac7b-64e08d1fa984 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.863405882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4d3b8b3-0c85-4e1a-935d-b33400f03f47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.863479784Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4d3b8b3-0c85-4e1a-935d-b33400f03f47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.863730365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03956f3a1a8ac3750c40bbced60c54d9439a663565cb2985230d0c6349ac71e6,PodSandboxId:1e63b5c05279bc65911efc46cb36a4f0870c1de46d1ef3e55684feb2f714e9c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722259069614795418,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-52v9x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 472c31e0-c2e1-412b-97f0-1556647833f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6574499,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4170db4eaf6862d30bd6cf8dd7df7fab13d695f14f3469054a1a3416d0cb1090,PodSandboxId:6ff338d74a1d65f6c48602e0b890605cb8f7c6e4af93550c2b4dd43e90c387f1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722258929875618849,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 403be47e-4fc8-4b4a-92f4-57da8aa66907,},Annotations:map[string]string{io.kubernete
s.container.hash: a9ca56f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd582e26ab76a87df81618e94fd8911acbfa53120eb2ffa43337935feef74266,PodSandboxId:1d9d83f0ae40b93bf932d0aedaf05b8058d90f958605e2698d19722186362211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722258872239203544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687ccff9-8f64-4657-a5
49-dd263a4d8b14,},Annotations:map[string]string{io.kubernetes.container.hash: faf43a33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437,PodSandboxId:d221ea69d4276dea6a4b1cd2b5ed17b45667f1314ad3f625584be89e17f3ff7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722258829637465966,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5nbcm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f,},Annotations:map[string]string{io.kubernetes.container.hash: d49ff5d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7,PodSandboxId:b381e1ba8f57616cdf6b71685f775baf346c16b6b016fa6e99d6fb8281155773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258794805286033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4440573-37d1-4041-a657-1e7338cd83c0,},Annotations:map[string]string{io.kubernetes.container.hash: 26a0b021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02,PodSandboxId:56c539f8ea97ceec92269fadc899bd84ed43572eeea2fa4f7d088e9054959f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258790345619597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-bdkkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563ebe0c-e2ff-41b8-b21f-cdcd975e3c60,},Annotations:map[string]string{io.kubernetes.container.hash: ff128fc9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37,PodSandboxId:7d2a402f353cd23f877482699c9477c1cbb1adc625741bd158f94dff2fc13b1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258788174948470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6h84v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241d7377-0c07-446a-8758-72ee113b1999,},Annotations:map[string]string{io.kubernetes.container.hash: 6ccdd6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8,PodSandboxId:b374475df602822d212ec9948162991d11520a52c57f58326e656c710bbc090a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75
a899,State:CONTAINER_RUNNING,CreatedAt:1722258768602478955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df7eb1b39c9f83f6aef26e5aa39bf30,},Annotations:map[string]string{io.kubernetes.container.hash: 232cf87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423,PodSandboxId:71a81d84e6960da9a4615d5f8142d1971ef808083ba1196848b433145a1dd8de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedA
t:1722258768579427222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea9dd953528f9555b7b1165a4eb4160,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e,PodSandboxId:7c94e609457546c62f1cc54fd2c04af39c9a2fbc72d3ac7434868f037ed26816,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172225876856384
1342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad020bf239eb10f745c57431ca19780,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0,PodSandboxId:d8531751d3b1f1efe60e3c9cc0c84dd258cbe682fdc1aee6db7367efa5999249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258768538556224,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09433823db087c4b78e1f4444c870f1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4d3b8b3-0c85-4e1a-935d-b33400f03f47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.900155109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0b80d61-51a0-435c-8e98-a846bdb03018 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.900240089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0b80d61-51a0-435c-8e98-a846bdb03018 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.901900772Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8db31940-2521-4e70-b36d-aba6cdb4c49e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.903189863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259241903161355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8db31940-2521-4e70-b36d-aba6cdb4c49e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.904127313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=145e235a-fc98-4459-95d2-83782136a7b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.904181903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=145e235a-fc98-4459-95d2-83782136a7b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.904479305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03956f3a1a8ac3750c40bbced60c54d9439a663565cb2985230d0c6349ac71e6,PodSandboxId:1e63b5c05279bc65911efc46cb36a4f0870c1de46d1ef3e55684feb2f714e9c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722259069614795418,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-52v9x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 472c31e0-c2e1-412b-97f0-1556647833f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6574499,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4170db4eaf6862d30bd6cf8dd7df7fab13d695f14f3469054a1a3416d0cb1090,PodSandboxId:6ff338d74a1d65f6c48602e0b890605cb8f7c6e4af93550c2b4dd43e90c387f1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722258929875618849,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 403be47e-4fc8-4b4a-92f4-57da8aa66907,},Annotations:map[string]string{io.kubernete
s.container.hash: a9ca56f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd582e26ab76a87df81618e94fd8911acbfa53120eb2ffa43337935feef74266,PodSandboxId:1d9d83f0ae40b93bf932d0aedaf05b8058d90f958605e2698d19722186362211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722258872239203544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687ccff9-8f64-4657-a5
49-dd263a4d8b14,},Annotations:map[string]string{io.kubernetes.container.hash: faf43a33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437,PodSandboxId:d221ea69d4276dea6a4b1cd2b5ed17b45667f1314ad3f625584be89e17f3ff7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722258829637465966,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5nbcm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f,},Annotations:map[string]string{io.kubernetes.container.hash: d49ff5d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7,PodSandboxId:b381e1ba8f57616cdf6b71685f775baf346c16b6b016fa6e99d6fb8281155773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258794805286033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4440573-37d1-4041-a657-1e7338cd83c0,},Annotations:map[string]string{io.kubernetes.container.hash: 26a0b021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02,PodSandboxId:56c539f8ea97ceec92269fadc899bd84ed43572eeea2fa4f7d088e9054959f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258790345619597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-bdkkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563ebe0c-e2ff-41b8-b21f-cdcd975e3c60,},Annotations:map[string]string{io.kubernetes.container.hash: ff128fc9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37,PodSandboxId:7d2a402f353cd23f877482699c9477c1cbb1adc625741bd158f94dff2fc13b1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258788174948470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6h84v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241d7377-0c07-446a-8758-72ee113b1999,},Annotations:map[string]string{io.kubernetes.container.hash: 6ccdd6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8,PodSandboxId:b374475df602822d212ec9948162991d11520a52c57f58326e656c710bbc090a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75
a899,State:CONTAINER_RUNNING,CreatedAt:1722258768602478955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df7eb1b39c9f83f6aef26e5aa39bf30,},Annotations:map[string]string{io.kubernetes.container.hash: 232cf87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423,PodSandboxId:71a81d84e6960da9a4615d5f8142d1971ef808083ba1196848b433145a1dd8de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedA
t:1722258768579427222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea9dd953528f9555b7b1165a4eb4160,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e,PodSandboxId:7c94e609457546c62f1cc54fd2c04af39c9a2fbc72d3ac7434868f037ed26816,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172225876856384
1342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad020bf239eb10f745c57431ca19780,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0,PodSandboxId:d8531751d3b1f1efe60e3c9cc0c84dd258cbe682fdc1aee6db7367efa5999249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258768538556224,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09433823db087c4b78e1f4444c870f1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=145e235a-fc98-4459-95d2-83782136a7b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.937912727Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da830cf5-e38a-4283-965c-0b0c771720e6 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.938003150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da830cf5-e38a-4283-965c-0b0c771720e6 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.939558693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76cb64ad-53fa-4bcf-b70c-e51919507ed7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.940949683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722259241940921328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76cb64ad-53fa-4bcf-b70c-e51919507ed7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.941746519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c55e1f5c-b96d-4193-9c78-ffc402b3b0d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.941805787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c55e1f5c-b96d-4193-9c78-ffc402b3b0d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:20:41 addons-881745 crio[678]: time="2024-07-29 13:20:41.942073153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03956f3a1a8ac3750c40bbced60c54d9439a663565cb2985230d0c6349ac71e6,PodSandboxId:1e63b5c05279bc65911efc46cb36a4f0870c1de46d1ef3e55684feb2f714e9c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722259069614795418,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-52v9x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 472c31e0-c2e1-412b-97f0-1556647833f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6574499,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4170db4eaf6862d30bd6cf8dd7df7fab13d695f14f3469054a1a3416d0cb1090,PodSandboxId:6ff338d74a1d65f6c48602e0b890605cb8f7c6e4af93550c2b4dd43e90c387f1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722258929875618849,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 403be47e-4fc8-4b4a-92f4-57da8aa66907,},Annotations:map[string]string{io.kubernete
s.container.hash: a9ca56f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd582e26ab76a87df81618e94fd8911acbfa53120eb2ffa43337935feef74266,PodSandboxId:1d9d83f0ae40b93bf932d0aedaf05b8058d90f958605e2698d19722186362211,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722258872239203544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687ccff9-8f64-4657-a5
49-dd263a4d8b14,},Annotations:map[string]string{io.kubernetes.container.hash: faf43a33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437,PodSandboxId:d221ea69d4276dea6a4b1cd2b5ed17b45667f1314ad3f625584be89e17f3ff7e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722258829637465966,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5nbcm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f,},Annotations:map[string]string{io.kubernetes.container.hash: d49ff5d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7,PodSandboxId:b381e1ba8f57616cdf6b71685f775baf346c16b6b016fa6e99d6fb8281155773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722258794805286033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4440573-37d1-4041-a657-1e7338cd83c0,},Annotations:map[string]string{io.kubernetes.container.hash: 26a0b021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02,PodSandboxId:56c539f8ea97ceec92269fadc899bd84ed43572eeea2fa4f7d088e9054959f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722258790345619597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-bdkkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563ebe0c-e2ff-41b8-b21f-cdcd975e3c60,},Annotations:map[string]string{io.kubernetes.container.hash: ff128fc9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37,PodSandboxId:7d2a402f353cd23f877482699c9477c1cbb1adc625741bd158f94dff2fc13b1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722258788174948470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6h84v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241d7377-0c07-446a-8758-72ee113b1999,},Annotations:map[string]string{io.kubernetes.container.hash: 6ccdd6d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8,PodSandboxId:b374475df602822d212ec9948162991d11520a52c57f58326e656c710bbc090a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75
a899,State:CONTAINER_RUNNING,CreatedAt:1722258768602478955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df7eb1b39c9f83f6aef26e5aa39bf30,},Annotations:map[string]string{io.kubernetes.container.hash: 232cf87a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423,PodSandboxId:71a81d84e6960da9a4615d5f8142d1971ef808083ba1196848b433145a1dd8de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedA
t:1722258768579427222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea9dd953528f9555b7b1165a4eb4160,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e,PodSandboxId:7c94e609457546c62f1cc54fd2c04af39c9a2fbc72d3ac7434868f037ed26816,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172225876856384
1342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad020bf239eb10f745c57431ca19780,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0,PodSandboxId:d8531751d3b1f1efe60e3c9cc0c84dd258cbe682fdc1aee6db7367efa5999249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722258768538556224,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09433823db087c4b78e1f4444c870f1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c55e1f5c-b96d-4193-9c78-ffc402b3b0d2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	03956f3a1a8ac       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   1e63b5c05279b       hello-world-app-6778b5fc9f-52v9x
	4170db4eaf686       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   6ff338d74a1d6       nginx
	dd582e26ab76a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   1d9d83f0ae40b       busybox
	47676082808be       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   d221ea69d4276       metrics-server-c59844bb4-5nbcm
	5a9c7c2c07dfe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   b381e1ba8f576       storage-provisioner
	a2fdb92d3b939       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   56c539f8ea97c       coredns-7db6d8ff4d-bdkkm
	45d6021d310a9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   7d2a402f353cd       kube-proxy-6h84v
	f925986f1ac40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   b374475df6028       etcd-addons-881745
	8b62cddfa98d3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   71a81d84e6960       kube-scheduler-addons-881745
	ecaab0389284a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   7c94e60945754       kube-apiserver-addons-881745
	8b6ec32f0b284       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   d8531751d3b1f       kube-controller-manager-addons-881745
	
	
	==> coredns [a2fdb92d3b939ff0d82e7e669537c4b030011f2cd1ef0b335d664124ed23fb02] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38998 - 21548 "HINFO IN 7096452714176680965.4973821805623616437. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011702965s
	[INFO] 10.244.0.22:36835 - 26671 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000519466s
	[INFO] 10.244.0.22:33177 - 62168 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000403977s
	[INFO] 10.244.0.22:48829 - 46584 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114311s
	[INFO] 10.244.0.22:55456 - 18332 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000062959s
	[INFO] 10.244.0.22:48083 - 37568 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082s
	[INFO] 10.244.0.22:39012 - 24425 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000051648s
	[INFO] 10.244.0.22:34952 - 52667 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000405899s
	[INFO] 10.244.0.22:49399 - 47521 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000900806s
	[INFO] 10.244.0.27:59072 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000475802s
	[INFO] 10.244.0.27:53032 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175396s
	
	
	==> describe nodes <==
	Name:               addons-881745
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-881745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=addons-881745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_12_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-881745
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:12:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-881745
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:20:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:17:59 +0000   Mon, 29 Jul 2024 13:12:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:17:59 +0000   Mon, 29 Jul 2024 13:12:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:17:59 +0000   Mon, 29 Jul 2024 13:12:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:17:59 +0000   Mon, 29 Jul 2024 13:12:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    addons-881745
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a5e6478103449d8ba6fc5473dbf0772
	  System UUID:                0a5e6478-1034-49d8-ba6f-c5473dbf0772
	  Boot ID:                    b717f393-ac51-46d7-8b5d-796bba867582
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  default                     hello-world-app-6778b5fc9f-52v9x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 coredns-7db6d8ff4d-bdkkm                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m35s
	  kube-system                 etcd-addons-881745                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m50s
	  kube-system                 kube-apiserver-addons-881745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 kube-controller-manager-addons-881745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-proxy-6h84v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-scheduler-addons-881745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 metrics-server-c59844bb4-5nbcm           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m30s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m33s  kube-proxy       
	  Normal  Starting                 7m49s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m49s  kubelet          Node addons-881745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m49s  kubelet          Node addons-881745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m49s  kubelet          Node addons-881745 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m48s  kubelet          Node addons-881745 status is now: NodeReady
	  Normal  RegisteredNode           7m36s  node-controller  Node addons-881745 event: Registered Node addons-881745 in Controller
	
	
	==> dmesg <==
	[  +5.006184] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.291589] kauditd_printk_skb: 102 callbacks suppressed
	[  +9.171223] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.374865] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.488589] kauditd_printk_skb: 28 callbacks suppressed
	[Jul29 13:14] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.191742] kauditd_printk_skb: 65 callbacks suppressed
	[  +6.730585] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.099425] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.021612] kauditd_printk_skb: 29 callbacks suppressed
	[ +10.797226] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.871161] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.182947] kauditd_printk_skb: 25 callbacks suppressed
	[Jul29 13:15] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.935504] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.330593] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.475228] kauditd_printk_skb: 6 callbacks suppressed
	[ +15.357543] kauditd_printk_skb: 13 callbacks suppressed
	[ +11.350239] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.838927] kauditd_printk_skb: 33 callbacks suppressed
	[Jul29 13:16] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.772190] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.942873] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.054606] kauditd_printk_skb: 16 callbacks suppressed
	[Jul29 13:17] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [f925986f1ac40320f580812ec0670960f90aef7f94cbe4cdcecac176ec9d3ba8] <==
	{"level":"info","ts":"2024-07-29T13:13:57.69663Z","caller":"traceutil/trace.go:171","msg":"trace[1029002797] transaction","detail":"{read_only:false; response_revision:957; number_of_response:1; }","duration":"205.056974ms","start":"2024-07-29T13:13:57.491559Z","end":"2024-07-29T13:13:57.696616Z","steps":["trace[1029002797] 'process raft request'  (duration: 204.957065ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:02.150723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.436551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-07-29T13:14:02.150809Z","caller":"traceutil/trace.go:171","msg":"trace[1373360230] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:973; }","duration":"280.545067ms","start":"2024-07-29T13:14:01.87024Z","end":"2024-07-29T13:14:02.150785Z","steps":["trace[1373360230] 'range keys from in-memory index tree'  (duration: 280.204223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:02.151045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.938543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T13:14:02.15107Z","caller":"traceutil/trace.go:171","msg":"trace[1781667686] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:973; }","duration":"331.984624ms","start":"2024-07-29T13:14:01.819076Z","end":"2024-07-29T13:14:02.15106Z","steps":["trace[1781667686] 'range keys from in-memory index tree'  (duration: 331.902867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:02.151086Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:14:01.819062Z","time spent":"332.02052ms","remote":"127.0.0.1:35970","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-29T13:14:17.683248Z","caller":"traceutil/trace.go:171","msg":"trace[1343931408] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"219.092595ms","start":"2024-07-29T13:14:17.464087Z","end":"2024-07-29T13:14:17.683179Z","steps":["trace[1343931408] 'process raft request'  (duration: 218.713619ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:14:23.480985Z","caller":"traceutil/trace.go:171","msg":"trace[1021386678] linearizableReadLoop","detail":"{readStateIndex:1155; appliedIndex:1154; }","duration":"300.379644ms","start":"2024-07-29T13:14:23.180593Z","end":"2024-07-29T13:14:23.480973Z","steps":["trace[1021386678] 'read index received'  (duration: 300.202036ms)","trace[1021386678] 'applied index is now lower than readState.Index'  (duration: 177.194µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:14:23.481196Z","caller":"traceutil/trace.go:171","msg":"trace[2130197719] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"359.298556ms","start":"2024-07-29T13:14:23.121887Z","end":"2024-07-29T13:14:23.481186Z","steps":["trace[2130197719] 'process raft request'  (duration: 358.951144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:23.481282Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:14:23.121869Z","time spent":"359.352242ms","remote":"127.0.0.1:36216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-bbeot3uwmws2c5asxatwkevkx4\" mod_revision:1070 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-bbeot3uwmws2c5asxatwkevkx4\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-bbeot3uwmws2c5asxatwkevkx4\" > >"}
	{"level":"warn","ts":"2024-07-29T13:14:23.481578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.971484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-29T13:14:23.481626Z","caller":"traceutil/trace.go:171","msg":"trace[1949183600] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1120; }","duration":"301.049246ms","start":"2024-07-29T13:14:23.18057Z","end":"2024-07-29T13:14:23.481619Z","steps":["trace[1949183600] 'agreement among raft nodes before linearized reading'  (duration: 300.92207ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:23.481649Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:14:23.180558Z","time spent":"301.085724ms","remote":"127.0.0.1:36160","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-29T13:14:23.482179Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.432944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-29T13:14:23.482206Z","caller":"traceutil/trace.go:171","msg":"trace[669980208] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1120; }","duration":"113.503204ms","start":"2024-07-29T13:14:23.368695Z","end":"2024-07-29T13:14:23.482199Z","steps":["trace[669980208] 'agreement among raft nodes before linearized reading'  (duration: 113.408179ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:14:26.108411Z","caller":"traceutil/trace.go:171","msg":"trace[1408556459] linearizableReadLoop","detail":"{readStateIndex:1161; appliedIndex:1160; }","duration":"239.843161ms","start":"2024-07-29T13:14:25.868552Z","end":"2024-07-29T13:14:26.108395Z","steps":["trace[1408556459] 'read index received'  (duration: 239.558095ms)","trace[1408556459] 'applied index is now lower than readState.Index'  (duration: 284.482µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:14:26.108646Z","caller":"traceutil/trace.go:171","msg":"trace[707371727] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"270.733193ms","start":"2024-07-29T13:14:25.837897Z","end":"2024-07-29T13:14:26.10863Z","steps":["trace[707371727] 'process raft request'  (duration: 270.339316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:14:26.109629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.071853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-29T13:14:26.1106Z","caller":"traceutil/trace.go:171","msg":"trace[959854237] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1126; }","duration":"242.057142ms","start":"2024-07-29T13:14:25.868532Z","end":"2024-07-29T13:14:26.110589Z","steps":["trace[959854237] 'agreement among raft nodes before linearized reading'  (duration: 240.046814ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:15:16.623674Z","caller":"traceutil/trace.go:171","msg":"trace[1473155785] linearizableReadLoop","detail":"{readStateIndex:1507; appliedIndex:1506; }","duration":"264.895847ms","start":"2024-07-29T13:15:16.358742Z","end":"2024-07-29T13:15:16.623638Z","steps":["trace[1473155785] 'read index received'  (duration: 264.75084ms)","trace[1473155785] 'applied index is now lower than readState.Index'  (duration: 144.478µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T13:15:16.623949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.131855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-29T13:15:16.623977Z","caller":"traceutil/trace.go:171","msg":"trace[2087636275] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1457; }","duration":"265.250517ms","start":"2024-07-29T13:15:16.358717Z","end":"2024-07-29T13:15:16.623968Z","steps":["trace[2087636275] 'agreement among raft nodes before linearized reading'  (duration: 265.048614ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:15:16.62419Z","caller":"traceutil/trace.go:171","msg":"trace[2017358641] transaction","detail":"{read_only:false; response_revision:1457; number_of_response:1; }","duration":"370.893701ms","start":"2024-07-29T13:15:16.253225Z","end":"2024-07-29T13:15:16.624119Z","steps":["trace[2017358641] 'process raft request'  (duration: 370.312955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:15:16.624419Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:15:16.253207Z","time spent":"371.022584ms","remote":"127.0.0.1:36216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-881745\" mod_revision:1362 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-881745\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-881745\" > >"}
	{"level":"info","ts":"2024-07-29T13:16:33.312023Z","caller":"traceutil/trace.go:171","msg":"trace[854009986] transaction","detail":"{read_only:false; response_revision:1923; number_of_response:1; }","duration":"216.698706ms","start":"2024-07-29T13:16:33.095298Z","end":"2024-07-29T13:16:33.311997Z","steps":["trace[854009986] 'process raft request'  (duration: 216.610463ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:20:42 up 8 min,  0 users,  load average: 0.43, 0.79, 0.53
	Linux addons-881745 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ecaab0389284a5611a2f3f280328a29c744e39a1101a6f90291b6b64a626591e] <==
	E0729 13:14:53.612845       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.73.95:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.73.95:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.73.95:443: connect: connection refused
	I0729 13:14:53.682582       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0729 13:15:21.749260       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0729 13:15:22.639670       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0729 13:15:22.811221       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 13:15:23.272706       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 13:15:27.266602       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 13:15:27.445143       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.253.94"}
	I0729 13:15:57.220713       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.220899       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.254596       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.254662       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.276134       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.276203       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.316264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.316386       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.370217       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 13:15:57.370241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 13:15:57.902921       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.245.40"}
	W0729 13:15:58.318157       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 13:15:58.370735       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 13:15:58.380218       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0729 13:16:15.405950       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.103:8443->10.244.0.32:36038: read: connection reset by peer
	I0729 13:17:48.412840       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.139.115"}
	E0729 13:17:49.912031       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [8b6ec32f0b2844f9f32d047fe7c0558f2ef70335b8bb8412a959550fa92a02c0] <==
	W0729 13:18:29.080820       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:18:29.080958       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:18:43.454480       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:18:43.454530       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:18:44.117447       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:18:44.117542       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:18:50.183234       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:18:50.183355       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:19:21.479630       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:19:21.479730       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:19:24.616257       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:19:24.616426       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:19:28.122410       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:19:28.122463       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:19:36.442956       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:19:36.443084       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:20:05.123464       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:20:05.123619       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:20:13.397090       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:20:13.397162       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:20:24.105415       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:20:24.105474       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 13:20:30.206101       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 13:20:30.206201       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 13:20:40.927143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="10.767µs"
	
	
	==> kube-proxy [45d6021d310a9b663fc97ab571c53fcea84ad5fd8a261a58c4d4dc6c7b7e4e37] <==
	I0729 13:13:08.803380       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:13:08.840503       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.103"]
	I0729 13:13:08.936202       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:13:08.936258       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:13:08.936275       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:13:08.939384       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:13:08.939573       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:13:08.939604       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:13:08.941034       1 config.go:192] "Starting service config controller"
	I0729 13:13:08.941068       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:13:08.941094       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:13:08.941098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:13:08.941639       1 config.go:319] "Starting node config controller"
	I0729 13:13:08.941668       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:13:09.041451       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:13:09.041466       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:13:09.043253       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b62cddfa98d32bab6392002bf26a28a2c77262b2bc184c1c946a4fa296c1423] <==
	E0729 13:12:50.960489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:12:50.960684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:12:50.960778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:12:50.963141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:12:51.818777       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 13:12:51.818824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 13:12:51.824985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:12:51.825059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:12:51.831160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 13:12:51.831220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 13:12:51.904670       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 13:12:51.904718       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 13:12:51.914304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:12:51.914386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 13:12:51.926485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:12:51.926526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:12:51.992542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:12:51.992589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:12:52.004912       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 13:12:52.004966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 13:12:52.019161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:12:52.019207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 13:12:52.044210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:12:52.044298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0729 13:12:53.852153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 13:17:54 addons-881745 kubelet[1267]: I0729 13:17:54.874682    1267 scope.go:117] "RemoveContainer" containerID="07fc855342932a22624138ff357f18546a242db912116e0725ff5c21462214aa"
	Jul 29 13:17:54 addons-881745 kubelet[1267]: I0729 13:17:54.899342    1267 scope.go:117] "RemoveContainer" containerID="3992fd665362212919fa33098f1a4b914227e4a9cf51702a1da0974768fa5d30"
	Jul 29 13:17:55 addons-881745 kubelet[1267]: I0729 13:17:55.298865    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f" path="/var/lib/kubelet/pods/7cdccdd9-4b11-4ec3-935d-c1e1bcd1eb6f/volumes"
	Jul 29 13:18:42 addons-881745 kubelet[1267]: I0729 13:18:42.294651    1267 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 13:18:53 addons-881745 kubelet[1267]: E0729 13:18:53.316879    1267 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:18:53 addons-881745 kubelet[1267]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:18:53 addons-881745 kubelet[1267]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:18:53 addons-881745 kubelet[1267]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:18:53 addons-881745 kubelet[1267]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:19:53 addons-881745 kubelet[1267]: E0729 13:19:53.320344    1267 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:19:53 addons-881745 kubelet[1267]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:19:53 addons-881745 kubelet[1267]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:19:53 addons-881745 kubelet[1267]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:19:53 addons-881745 kubelet[1267]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:20:02 addons-881745 kubelet[1267]: I0729 13:20:02.294454    1267 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.290560    1267 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f-tmp-dir\") pod \"c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f\" (UID: \"c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f\") "
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.290616    1267 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tgnl\" (UniqueName: \"kubernetes.io/projected/c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f-kube-api-access-8tgnl\") pod \"c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f\" (UID: \"c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f\") "
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.291795    1267 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f" (UID: "c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.301563    1267 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f-kube-api-access-8tgnl" (OuterVolumeSpecName: "kube-api-access-8tgnl") pod "c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f" (UID: "c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f"). InnerVolumeSpecName "kube-api-access-8tgnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.391542    1267 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f-tmp-dir\") on node \"addons-881745\" DevicePath \"\""
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.391571    1267 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8tgnl\" (UniqueName: \"kubernetes.io/projected/c04ce8b2-943f-4d1d-afd0-7e1d7d17e36f-kube-api-access-8tgnl\") on node \"addons-881745\" DevicePath \"\""
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.505376    1267 scope.go:117] "RemoveContainer" containerID="47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437"
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.538863    1267 scope.go:117] "RemoveContainer" containerID="47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437"
	Jul 29 13:20:42 addons-881745 kubelet[1267]: E0729 13:20:42.541474    1267 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437\": container with ID starting with 47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437 not found: ID does not exist" containerID="47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437"
	Jul 29 13:20:42 addons-881745 kubelet[1267]: I0729 13:20:42.541529    1267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437"} err="failed to get container status \"47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437\": rpc error: code = NotFound desc = could not find container \"47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437\": container with ID starting with 47676082808be0445d4754f760e2bdb551f8ee495159616efc180e768fb90437 not found: ID does not exist"
	
	
	==> storage-provisioner [5a9c7c2c07dfe1388af48c648ad63186b6ffd23d651ffa0e698cd5c45c04b5d7] <==
	I0729 13:13:15.947881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 13:13:15.968012       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 13:13:15.968210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 13:13:15.983759       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 13:13:15.984899       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78a26d8d-8c16-4971-9e86-a7a7d4f19145", APIVersion:"v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-881745_aa192ac5-bb3a-4e9d-bba3-13da7fa7b8e2 became leader
	I0729 13:13:15.985405       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-881745_aa192ac5-bb3a-4e9d-bba3-13da7fa7b8e2!
	I0729 13:13:16.086137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-881745_aa192ac5-bb3a-4e9d-bba3-13da7fa7b8e2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-881745 -n addons-881745
helpers_test.go:261: (dbg) Run:  kubectl --context addons-881745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (344.24s)
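Note on the failure mode: the kube-apiserver log above ("v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.73.95:443/apis/metrics.k8s.io/v1beta1 ... connection refused") indicates the APIService backed by metrics-server never became reachable within the test window. As a hedged illustration only (this is not code from the minikube test harness), the Go sketch below polls API discovery for that group/version until it is served or a deadline passes; the kubeconfig path, poll interval, and five-minute deadline are assumptions.

// pollmetrics.go - illustrative sketch, not minikube test code.
// Assumes a kubeconfig at the default location pointing at the cluster above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(5 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		// Discovery only succeeds once the metrics-server APIService is reachable,
		// i.e. once the "connection refused" condition seen in the apiserver log clears.
		if _, err := client.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1"); err == nil {
			fmt.Println("metrics.k8s.io/v1beta1 is being served")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for metrics.k8s.io/v1beta1")
}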

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-881745
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-881745: exit status 82 (2m0.467037432s)

                                                
                                                
-- stdout --
	* Stopping node "addons-881745"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-881745" : exit status 82
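Note: exit status 82 (GUEST_STOP_TIMEOUT) means the kvm2 driver polled the VM for the whole stop window and it never left the "Running" state. A minimal diagnostic sketch, assuming the libvirt domain carries the profile name (the force-stop step is illustrative and not part of the test flow):

    # check what libvirt thinks the domain is doing
    virsh list --all
    virsh domstate addons-881745
    # collect minikube's own logs for the issue report
    out/minikube-linux-amd64 -p addons-881745 logs --file=logs.txt
    # as a last resort, force the domain off and clean up the profile
    virsh destroy addons-881745
    out/minikube-linux-amd64 delete -p addons-881745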
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-881745
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-881745: exit status 11 (21.691491792s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-881745" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-881745
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-881745: exit status 11 (6.144903092s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-881745" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-881745
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-881745: exit status 11 (6.143207128s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-881745" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.45s)
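Note: the three exit status 11 failures above share one root cause: `addons enable`/`addons disable` first open an SSH session to the node and run crictl to check whether the runtime is paused, and the dial to 192.168.39.103:22 returns "no route to host" because the VM was left mid-stop. A small reachability check, as a sketch (the nc probe is illustrative, not part of the test):

    # confirm the guest address minikube has on record
    out/minikube-linux-amd64 -p addons-881745 ip
    # probe the SSH port the addon commands depend on
    nc -vz -w 5 192.168.39.103 22
    # if the port answers, the paused check can be run by hand
    out/minikube-linux-amd64 -p addons-881745 ssh "sudo crictl ps"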

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 node stop m02 -v=7 --alsologtostderr
E0729 13:32:27.149288  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:47.629547  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:33:28.589998  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.470552216s)

                                                
                                                
-- stdout --
	* Stopping node "ha-104111-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:32:26.676296  996888 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:32:26.676580  996888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:32:26.676589  996888 out.go:304] Setting ErrFile to fd 2...
	I0729 13:32:26.676593  996888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:32:26.676828  996888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:32:26.677179  996888 mustload.go:65] Loading cluster: ha-104111
	I0729 13:32:26.677740  996888 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:32:26.677771  996888 stop.go:39] StopHost: ha-104111-m02
	I0729 13:32:26.678214  996888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:32:26.678266  996888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:32:26.694785  996888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0729 13:32:26.695408  996888 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:32:26.696089  996888 main.go:141] libmachine: Using API Version  1
	I0729 13:32:26.696113  996888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:32:26.696484  996888 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:32:26.698886  996888 out.go:177] * Stopping node "ha-104111-m02"  ...
	I0729 13:32:26.700226  996888 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 13:32:26.700273  996888 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:32:26.700565  996888 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 13:32:26.700603  996888 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:32:26.703484  996888 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:32:26.703979  996888 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:32:26.704007  996888 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:32:26.704191  996888 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:32:26.704394  996888 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:32:26.704605  996888 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:32:26.704788  996888 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:32:26.795774  996888 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 13:32:26.848742  996888 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 13:32:26.904517  996888 main.go:141] libmachine: Stopping "ha-104111-m02"...
	I0729 13:32:26.904568  996888 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:32:26.906251  996888 main.go:141] libmachine: (ha-104111-m02) Calling .Stop
	I0729 13:32:26.909775  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 0/120
	I0729 13:32:27.911662  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 1/120
	I0729 13:32:28.912907  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 2/120
	I0729 13:32:29.914837  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 3/120
	I0729 13:32:30.916569  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 4/120
	I0729 13:32:31.918589  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 5/120
	I0729 13:32:32.920005  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 6/120
	I0729 13:32:33.921571  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 7/120
	I0729 13:32:34.923056  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 8/120
	I0729 13:32:35.924454  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 9/120
	I0729 13:32:36.926640  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 10/120
	I0729 13:32:37.927975  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 11/120
	I0729 13:32:38.929344  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 12/120
	I0729 13:32:39.930812  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 13/120
	I0729 13:32:40.932584  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 14/120
	I0729 13:32:41.934681  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 15/120
	I0729 13:32:42.936793  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 16/120
	I0729 13:32:43.938150  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 17/120
	I0729 13:32:44.939500  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 18/120
	I0729 13:32:45.941197  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 19/120
	I0729 13:32:46.943654  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 20/120
	I0729 13:32:47.945191  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 21/120
	I0729 13:32:48.946896  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 22/120
	I0729 13:32:49.948241  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 23/120
	I0729 13:32:50.950110  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 24/120
	I0729 13:32:51.951430  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 25/120
	I0729 13:32:52.953452  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 26/120
	I0729 13:32:53.954913  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 27/120
	I0729 13:32:54.956348  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 28/120
	I0729 13:32:55.957481  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 29/120
	I0729 13:32:56.959645  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 30/120
	I0729 13:32:57.960991  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 31/120
	I0729 13:32:58.963026  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 32/120
	I0729 13:32:59.964278  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 33/120
	I0729 13:33:00.965635  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 34/120
	I0729 13:33:01.967089  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 35/120
	I0729 13:33:02.968403  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 36/120
	I0729 13:33:03.969808  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 37/120
	I0729 13:33:04.971246  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 38/120
	I0729 13:33:05.972778  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 39/120
	I0729 13:33:06.974927  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 40/120
	I0729 13:33:07.976695  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 41/120
	I0729 13:33:08.978951  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 42/120
	I0729 13:33:09.981078  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 43/120
	I0729 13:33:10.982930  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 44/120
	I0729 13:33:11.984404  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 45/120
	I0729 13:33:12.985745  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 46/120
	I0729 13:33:13.986991  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 47/120
	I0729 13:33:14.988369  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 48/120
	I0729 13:33:15.989705  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 49/120
	I0729 13:33:16.991881  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 50/120
	I0729 13:33:17.993253  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 51/120
	I0729 13:33:18.995114  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 52/120
	I0729 13:33:19.996303  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 53/120
	I0729 13:33:20.997712  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 54/120
	I0729 13:33:22.000023  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 55/120
	I0729 13:33:23.001337  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 56/120
	I0729 13:33:24.002848  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 57/120
	I0729 13:33:25.004310  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 58/120
	I0729 13:33:26.005768  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 59/120
	I0729 13:33:27.007414  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 60/120
	I0729 13:33:28.009535  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 61/120
	I0729 13:33:29.010959  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 62/120
	I0729 13:33:30.013149  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 63/120
	I0729 13:33:31.015009  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 64/120
	I0729 13:33:32.016554  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 65/120
	I0729 13:33:33.018215  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 66/120
	I0729 13:33:34.020033  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 67/120
	I0729 13:33:35.021563  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 68/120
	I0729 13:33:36.022789  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 69/120
	I0729 13:33:37.024667  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 70/120
	I0729 13:33:38.027009  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 71/120
	I0729 13:33:39.028255  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 72/120
	I0729 13:33:40.030426  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 73/120
	I0729 13:33:41.031871  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 74/120
	I0729 13:33:42.033522  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 75/120
	I0729 13:33:43.034910  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 76/120
	I0729 13:33:44.036251  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 77/120
	I0729 13:33:45.037603  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 78/120
	I0729 13:33:46.038819  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 79/120
	I0729 13:33:47.040291  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 80/120
	I0729 13:33:48.042167  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 81/120
	I0729 13:33:49.043583  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 82/120
	I0729 13:33:50.044919  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 83/120
	I0729 13:33:51.046877  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 84/120
	I0729 13:33:52.048825  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 85/120
	I0729 13:33:53.050062  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 86/120
	I0729 13:33:54.051364  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 87/120
	I0729 13:33:55.052519  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 88/120
	I0729 13:33:56.053724  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 89/120
	I0729 13:33:57.055771  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 90/120
	I0729 13:33:58.057156  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 91/120
	I0729 13:33:59.058879  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 92/120
	I0729 13:34:00.060293  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 93/120
	I0729 13:34:01.061495  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 94/120
	I0729 13:34:02.063311  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 95/120
	I0729 13:34:03.064795  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 96/120
	I0729 13:34:04.066469  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 97/120
	I0729 13:34:05.067779  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 98/120
	I0729 13:34:06.069084  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 99/120
	I0729 13:34:07.070751  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 100/120
	I0729 13:34:08.072053  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 101/120
	I0729 13:34:09.073292  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 102/120
	I0729 13:34:10.074841  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 103/120
	I0729 13:34:11.076160  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 104/120
	I0729 13:34:12.077997  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 105/120
	I0729 13:34:13.079504  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 106/120
	I0729 13:34:14.081065  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 107/120
	I0729 13:34:15.083425  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 108/120
	I0729 13:34:16.084698  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 109/120
	I0729 13:34:17.086890  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 110/120
	I0729 13:34:18.088259  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 111/120
	I0729 13:34:19.089723  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 112/120
	I0729 13:34:20.091109  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 113/120
	I0729 13:34:21.093459  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 114/120
	I0729 13:34:22.094846  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 115/120
	I0729 13:34:23.096346  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 116/120
	I0729 13:34:24.097824  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 117/120
	I0729 13:34:25.099122  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 118/120
	I0729 13:34:26.100551  996888 main.go:141] libmachine: (ha-104111-m02) Waiting for machine to stop 119/120
	I0729 13:34:27.101545  996888 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 13:34:27.101736  996888 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-104111 node stop m02 -v=7 --alsologtostderr": exit status 30
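Note: the "Waiting for machine to stop N/120" lines are the kvm2 driver polling the domain state roughly once per second for up to 120 attempts before giving up with exit status 30. The same bounded wait can be reproduced from the shell; this is an illustrative sketch, not the driver's code (the one-second interval and 120-attempt cap are taken from the log above):

    # poll the libvirt domain until it reports "shut off" or the attempts run out
    for i in $(seq 1 120); do
      state=$(virsh domstate ha-104111-m02)
      echo "attempt ${i}/120: ${state}"
      [ "${state}" = "shut off" ] && break
      sleep 1
    done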
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
E0729 13:34:30.664578  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 3 (19.220427279s)

                                                
                                                
-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-104111-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:34:27.147765  997310 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:27.148075  997310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:27.148086  997310 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:27.148091  997310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:27.148286  997310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:34:27.148487  997310 out.go:298] Setting JSON to false
	I0729 13:34:27.148513  997310 mustload.go:65] Loading cluster: ha-104111
	I0729 13:34:27.148556  997310 notify.go:220] Checking for updates...
	I0729 13:34:27.148956  997310 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:34:27.148978  997310 status.go:255] checking status of ha-104111 ...
	I0729 13:34:27.149570  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:27.149636  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:27.166270  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42593
	I0729 13:34:27.166909  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:27.167511  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:27.167535  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:27.167960  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:27.168214  997310 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:34:27.170158  997310 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:34:27.170177  997310 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:34:27.170474  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:27.170516  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:27.185774  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37099
	I0729 13:34:27.186241  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:27.186737  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:27.186757  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:27.187044  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:27.187220  997310 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:34:27.189771  997310 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:27.190189  997310 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:34:27.190219  997310 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:27.190364  997310 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:34:27.190722  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:27.190781  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:27.206229  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0729 13:34:27.206739  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:27.207470  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:27.207494  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:27.207856  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:27.207988  997310 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:34:27.208192  997310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:27.208219  997310 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:34:27.210808  997310 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:27.211206  997310 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:34:27.211229  997310 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:27.211361  997310 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:34:27.211539  997310 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:34:27.211695  997310 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:34:27.211898  997310 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:34:27.301389  997310 ssh_runner.go:195] Run: systemctl --version
	I0729 13:34:27.308173  997310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:34:27.325333  997310 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:34:27.325370  997310 api_server.go:166] Checking apiserver status ...
	I0729 13:34:27.325406  997310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:34:27.340931  997310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0729 13:34:27.351975  997310 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:34:27.352051  997310 ssh_runner.go:195] Run: ls
	I0729 13:34:27.356783  997310 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:34:27.362743  997310 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:34:27.362773  997310 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:34:27.362783  997310 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:34:27.362807  997310 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:34:27.363222  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:27.363298  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:27.379467  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I0729 13:34:27.379999  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:27.380582  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:27.380606  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:27.381009  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:27.381251  997310 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:34:27.382932  997310 status.go:330] ha-104111-m02 host status = "Running" (err=<nil>)
	I0729 13:34:27.382948  997310 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:34:27.383309  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:27.383346  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:27.399630  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0729 13:34:27.400109  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:27.400601  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:27.400628  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:27.401004  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:27.401203  997310 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:34:27.404047  997310 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:27.404518  997310 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:34:27.404542  997310 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:27.404669  997310 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:34:27.404980  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:27.405026  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:27.419445  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0729 13:34:27.419862  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:27.420345  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:27.420371  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:27.420736  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:27.420966  997310 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:34:27.421162  997310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:27.421183  997310 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:34:27.423762  997310 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:27.424124  997310 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:34:27.424142  997310 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:27.424283  997310 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:34:27.424473  997310 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:34:27.424611  997310 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:34:27.424736  997310 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	W0729 13:34:45.956618  997310 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:34:45.956730  997310 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0729 13:34:45.956748  997310 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:34:45.956756  997310 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 13:34:45.956780  997310 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:34:45.956787  997310 status.go:255] checking status of ha-104111-m03 ...
	I0729 13:34:45.957109  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:45.957153  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:45.972655  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0729 13:34:45.973197  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:45.973725  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:45.973755  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:45.974056  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:45.974224  997310 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:34:45.975601  997310 status.go:330] ha-104111-m03 host status = "Running" (err=<nil>)
	I0729 13:34:45.975621  997310 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:34:45.975951  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:45.975996  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:45.992183  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0729 13:34:45.992643  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:45.993145  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:45.993165  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:45.993525  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:45.993728  997310 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:34:45.996293  997310 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:34:45.996740  997310 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:34:45.996766  997310 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:34:45.996880  997310 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:34:45.997300  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:45.997343  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:46.012744  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0729 13:34:46.013111  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:46.013590  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:46.013612  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:46.013933  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:46.014137  997310 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:34:46.014327  997310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:46.014352  997310 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:34:46.016966  997310 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:34:46.017418  997310 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:34:46.017437  997310 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:34:46.017575  997310 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:34:46.017764  997310 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:34:46.017940  997310 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:34:46.018069  997310 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:34:46.101985  997310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:34:46.121644  997310 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:34:46.121680  997310 api_server.go:166] Checking apiserver status ...
	I0729 13:34:46.121717  997310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:34:46.137929  997310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0729 13:34:46.148425  997310 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:34:46.148488  997310 ssh_runner.go:195] Run: ls
	I0729 13:34:46.152761  997310 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:34:46.157263  997310 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:34:46.157285  997310 status.go:422] ha-104111-m03 apiserver status = Running (err=<nil>)
	I0729 13:34:46.157293  997310 status.go:257] ha-104111-m03 status: &{Name:ha-104111-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:34:46.157308  997310 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:34:46.157583  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:46.157619  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:46.172896  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0729 13:34:46.173327  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:46.173856  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:46.173878  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:46.174264  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:46.174503  997310 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:34:46.176429  997310 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:34:46.176450  997310 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:34:46.176783  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:46.176842  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:46.192060  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I0729 13:34:46.192630  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:46.193129  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:46.193150  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:46.193480  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:46.193664  997310 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:34:46.196182  997310 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:34:46.196651  997310 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:34:46.196682  997310 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:34:46.196842  997310 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:34:46.197152  997310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:46.197187  997310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:46.212003  997310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0729 13:34:46.212495  997310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:46.212942  997310 main.go:141] libmachine: Using API Version  1
	I0729 13:34:46.212972  997310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:46.213275  997310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:46.213462  997310 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:34:46.213651  997310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:46.213683  997310 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:34:46.216115  997310 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:34:46.216558  997310 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:34:46.216594  997310 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:34:46.216806  997310 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:34:46.216969  997310 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:34:46.217132  997310 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:34:46.217297  997310 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:34:46.305580  997310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:34:46.321692  997310 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr" : exit status 3
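Note: the status exit status 3 reflects m02 being unreachable rather than stopped: the status command could not open an SSH session to 192.168.39.140:22, so kubelet and apiserver are reported as Nonexistent while libvirt still considers the domain running. A quick cross-check, sketched with commands assumed to be available on the host:

    # list the nodes minikube tracks for this profile
    out/minikube-linux-amd64 node list -p ha-104111
    # compare against what libvirt reports for the stuck node
    virsh domstate ha-104111-m02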
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-104111 -n ha-104111
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-104111 logs -n 25: (1.430801439s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111:/home/docker/cp-test_ha-104111-m03_ha-104111.txt                       |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111 sudo cat                                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111.txt                                 |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m02:/home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m02 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04:/home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m04 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp testdata/cp-test.txt                                                | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111:/home/docker/cp-test_ha-104111-m04_ha-104111.txt                       |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111 sudo cat                                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111.txt                                 |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m02:/home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m02 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03:/home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m03 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-104111 node stop m02 -v=7                                                     | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:27:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:27:50.788594  992950 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:27:50.788711  992950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:50.788720  992950 out.go:304] Setting ErrFile to fd 2...
	I0729 13:27:50.788724  992950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:50.788892  992950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:27:50.789445  992950 out.go:298] Setting JSON to false
	I0729 13:27:50.790362  992950 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11423,"bootTime":1722248248,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:27:50.790420  992950 start.go:139] virtualization: kvm guest
	I0729 13:27:50.792605  992950 out.go:177] * [ha-104111] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:27:50.793994  992950 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:27:50.793992  992950 notify.go:220] Checking for updates...
	I0729 13:27:50.796557  992950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:27:50.798040  992950 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:27:50.799414  992950 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:27:50.800730  992950 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:27:50.802076  992950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:27:50.803553  992950 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:27:50.838089  992950 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:27:50.839245  992950 start.go:297] selected driver: kvm2
	I0729 13:27:50.839259  992950 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:27:50.839275  992950 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:27:50.840234  992950 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:27:50.840342  992950 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:27:50.854536  992950 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:27:50.854586  992950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:27:50.854795  992950 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:27:50.854863  992950 cni.go:84] Creating CNI manager for ""
	I0729 13:27:50.854876  992950 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 13:27:50.854887  992950 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 13:27:50.854944  992950 start.go:340] cluster config:
	{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:27:50.855039  992950 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:27:50.857541  992950 out.go:177] * Starting "ha-104111" primary control-plane node in "ha-104111" cluster
	I0729 13:27:50.858759  992950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:27:50.858788  992950 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:27:50.858798  992950 cache.go:56] Caching tarball of preloaded images
	I0729 13:27:50.858894  992950 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:27:50.858909  992950 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:27:50.859226  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:27:50.859248  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json: {Name:mk83cae594c7e4085d286e1d9eb5152c87251bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:27:50.859395  992950 start.go:360] acquireMachinesLock for ha-104111: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:27:50.859428  992950 start.go:364] duration metric: took 17.801µs to acquireMachinesLock for "ha-104111"
	I0729 13:27:50.859457  992950 start.go:93] Provisioning new machine with config: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:27:50.859519  992950 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:27:50.861740  992950 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:27:50.861869  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:27:50.861907  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:27:50.875832  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0729 13:27:50.876276  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:27:50.876873  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:27:50.876895  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:27:50.877289  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:27:50.877499  992950 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:27:50.877719  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:27:50.877902  992950 start.go:159] libmachine.API.Create for "ha-104111" (driver="kvm2")
	I0729 13:27:50.877929  992950 client.go:168] LocalClient.Create starting
	I0729 13:27:50.877972  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 13:27:50.878009  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:27:50.878031  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:27:50.878092  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 13:27:50.878110  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:27:50.878122  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:27:50.878137  992950 main.go:141] libmachine: Running pre-create checks...
	I0729 13:27:50.878150  992950 main.go:141] libmachine: (ha-104111) Calling .PreCreateCheck
	I0729 13:27:50.878538  992950 main.go:141] libmachine: (ha-104111) Calling .GetConfigRaw
	I0729 13:27:50.878889  992950 main.go:141] libmachine: Creating machine...
	I0729 13:27:50.878901  992950 main.go:141] libmachine: (ha-104111) Calling .Create
	I0729 13:27:50.879009  992950 main.go:141] libmachine: (ha-104111) Creating KVM machine...
	I0729 13:27:50.880156  992950 main.go:141] libmachine: (ha-104111) DBG | found existing default KVM network
	I0729 13:27:50.880906  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:50.880790  992973 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0729 13:27:50.880984  992950 main.go:141] libmachine: (ha-104111) DBG | created network xml: 
	I0729 13:27:50.881005  992950 main.go:141] libmachine: (ha-104111) DBG | <network>
	I0729 13:27:50.881017  992950 main.go:141] libmachine: (ha-104111) DBG |   <name>mk-ha-104111</name>
	I0729 13:27:50.881031  992950 main.go:141] libmachine: (ha-104111) DBG |   <dns enable='no'/>
	I0729 13:27:50.881042  992950 main.go:141] libmachine: (ha-104111) DBG |   
	I0729 13:27:50.881053  992950 main.go:141] libmachine: (ha-104111) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 13:27:50.881065  992950 main.go:141] libmachine: (ha-104111) DBG |     <dhcp>
	I0729 13:27:50.881078  992950 main.go:141] libmachine: (ha-104111) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 13:27:50.881090  992950 main.go:141] libmachine: (ha-104111) DBG |     </dhcp>
	I0729 13:27:50.881100  992950 main.go:141] libmachine: (ha-104111) DBG |   </ip>
	I0729 13:27:50.881121  992950 main.go:141] libmachine: (ha-104111) DBG |   
	I0729 13:27:50.881139  992950 main.go:141] libmachine: (ha-104111) DBG | </network>
	I0729 13:27:50.881154  992950 main.go:141] libmachine: (ha-104111) DBG | 
	I0729 13:27:50.885676  992950 main.go:141] libmachine: (ha-104111) DBG | trying to create private KVM network mk-ha-104111 192.168.39.0/24...
	I0729 13:27:50.951493  992950 main.go:141] libmachine: (ha-104111) DBG | private KVM network mk-ha-104111 192.168.39.0/24 created
	I0729 13:27:50.951527  992950 main.go:141] libmachine: (ha-104111) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111 ...
	I0729 13:27:50.951542  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:50.951462  992973 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:27:50.951555  992950 main.go:141] libmachine: (ha-104111) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:27:50.951576  992950 main.go:141] libmachine: (ha-104111) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:27:51.230142  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:51.230009  992973 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa...
	I0729 13:27:51.778580  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:51.778412  992973 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/ha-104111.rawdisk...
	I0729 13:27:51.778629  992950 main.go:141] libmachine: (ha-104111) DBG | Writing magic tar header
	I0729 13:27:51.778645  992950 main.go:141] libmachine: (ha-104111) DBG | Writing SSH key tar header
	I0729 13:27:51.778661  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:51.778578  992973 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111 ...
	I0729 13:27:51.778743  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111
	I0729 13:27:51.778771  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 13:27:51.778783  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111 (perms=drwx------)
	I0729 13:27:51.778795  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:27:51.778808  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 13:27:51.778818  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:27:51.778834  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 13:27:51.778875  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:27:51.778890  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 13:27:51.778907  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:27:51.778919  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:27:51.778930  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:27:51.778944  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home
	I0729 13:27:51.778955  992950 main.go:141] libmachine: (ha-104111) DBG | Skipping /home - not owner
	I0729 13:27:51.778969  992950 main.go:141] libmachine: (ha-104111) Creating domain...
	I0729 13:27:51.780145  992950 main.go:141] libmachine: (ha-104111) define libvirt domain using xml: 
	I0729 13:27:51.780169  992950 main.go:141] libmachine: (ha-104111) <domain type='kvm'>
	I0729 13:27:51.780176  992950 main.go:141] libmachine: (ha-104111)   <name>ha-104111</name>
	I0729 13:27:51.780184  992950 main.go:141] libmachine: (ha-104111)   <memory unit='MiB'>2200</memory>
	I0729 13:27:51.780212  992950 main.go:141] libmachine: (ha-104111)   <vcpu>2</vcpu>
	I0729 13:27:51.780234  992950 main.go:141] libmachine: (ha-104111)   <features>
	I0729 13:27:51.780241  992950 main.go:141] libmachine: (ha-104111)     <acpi/>
	I0729 13:27:51.780248  992950 main.go:141] libmachine: (ha-104111)     <apic/>
	I0729 13:27:51.780275  992950 main.go:141] libmachine: (ha-104111)     <pae/>
	I0729 13:27:51.780296  992950 main.go:141] libmachine: (ha-104111)     
	I0729 13:27:51.780308  992950 main.go:141] libmachine: (ha-104111)   </features>
	I0729 13:27:51.780319  992950 main.go:141] libmachine: (ha-104111)   <cpu mode='host-passthrough'>
	I0729 13:27:51.780327  992950 main.go:141] libmachine: (ha-104111)   
	I0729 13:27:51.780337  992950 main.go:141] libmachine: (ha-104111)   </cpu>
	I0729 13:27:51.780347  992950 main.go:141] libmachine: (ha-104111)   <os>
	I0729 13:27:51.780358  992950 main.go:141] libmachine: (ha-104111)     <type>hvm</type>
	I0729 13:27:51.780369  992950 main.go:141] libmachine: (ha-104111)     <boot dev='cdrom'/>
	I0729 13:27:51.780381  992950 main.go:141] libmachine: (ha-104111)     <boot dev='hd'/>
	I0729 13:27:51.780388  992950 main.go:141] libmachine: (ha-104111)     <bootmenu enable='no'/>
	I0729 13:27:51.780398  992950 main.go:141] libmachine: (ha-104111)   </os>
	I0729 13:27:51.780423  992950 main.go:141] libmachine: (ha-104111)   <devices>
	I0729 13:27:51.780436  992950 main.go:141] libmachine: (ha-104111)     <disk type='file' device='cdrom'>
	I0729 13:27:51.780453  992950 main.go:141] libmachine: (ha-104111)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/boot2docker.iso'/>
	I0729 13:27:51.780475  992950 main.go:141] libmachine: (ha-104111)       <target dev='hdc' bus='scsi'/>
	I0729 13:27:51.780487  992950 main.go:141] libmachine: (ha-104111)       <readonly/>
	I0729 13:27:51.780503  992950 main.go:141] libmachine: (ha-104111)     </disk>
	I0729 13:27:51.780538  992950 main.go:141] libmachine: (ha-104111)     <disk type='file' device='disk'>
	I0729 13:27:51.780565  992950 main.go:141] libmachine: (ha-104111)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:27:51.780584  992950 main.go:141] libmachine: (ha-104111)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/ha-104111.rawdisk'/>
	I0729 13:27:51.780594  992950 main.go:141] libmachine: (ha-104111)       <target dev='hda' bus='virtio'/>
	I0729 13:27:51.780606  992950 main.go:141] libmachine: (ha-104111)     </disk>
	I0729 13:27:51.780617  992950 main.go:141] libmachine: (ha-104111)     <interface type='network'>
	I0729 13:27:51.780626  992950 main.go:141] libmachine: (ha-104111)       <source network='mk-ha-104111'/>
	I0729 13:27:51.780636  992950 main.go:141] libmachine: (ha-104111)       <model type='virtio'/>
	I0729 13:27:51.780647  992950 main.go:141] libmachine: (ha-104111)     </interface>
	I0729 13:27:51.780656  992950 main.go:141] libmachine: (ha-104111)     <interface type='network'>
	I0729 13:27:51.780668  992950 main.go:141] libmachine: (ha-104111)       <source network='default'/>
	I0729 13:27:51.780677  992950 main.go:141] libmachine: (ha-104111)       <model type='virtio'/>
	I0729 13:27:51.780689  992950 main.go:141] libmachine: (ha-104111)     </interface>
	I0729 13:27:51.780699  992950 main.go:141] libmachine: (ha-104111)     <serial type='pty'>
	I0729 13:27:51.780710  992950 main.go:141] libmachine: (ha-104111)       <target port='0'/>
	I0729 13:27:51.780720  992950 main.go:141] libmachine: (ha-104111)     </serial>
	I0729 13:27:51.780743  992950 main.go:141] libmachine: (ha-104111)     <console type='pty'>
	I0729 13:27:51.780764  992950 main.go:141] libmachine: (ha-104111)       <target type='serial' port='0'/>
	I0729 13:27:51.780779  992950 main.go:141] libmachine: (ha-104111)     </console>
	I0729 13:27:51.780788  992950 main.go:141] libmachine: (ha-104111)     <rng model='virtio'>
	I0729 13:27:51.780799  992950 main.go:141] libmachine: (ha-104111)       <backend model='random'>/dev/random</backend>
	I0729 13:27:51.780809  992950 main.go:141] libmachine: (ha-104111)     </rng>
	I0729 13:27:51.780820  992950 main.go:141] libmachine: (ha-104111)     
	I0729 13:27:51.780839  992950 main.go:141] libmachine: (ha-104111)     
	I0729 13:27:51.780850  992950 main.go:141] libmachine: (ha-104111)   </devices>
	I0729 13:27:51.780859  992950 main.go:141] libmachine: (ha-104111) </domain>
	I0729 13:27:51.780869  992950 main.go:141] libmachine: (ha-104111) 
	I0729 13:27:51.784970  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:56:90:6c in network default
	I0729 13:27:51.785491  992950 main.go:141] libmachine: (ha-104111) Ensuring networks are active...
	I0729 13:27:51.785516  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:51.786082  992950 main.go:141] libmachine: (ha-104111) Ensuring network default is active
	I0729 13:27:51.786356  992950 main.go:141] libmachine: (ha-104111) Ensuring network mk-ha-104111 is active
	I0729 13:27:51.786802  992950 main.go:141] libmachine: (ha-104111) Getting domain xml...
	I0729 13:27:51.787733  992950 main.go:141] libmachine: (ha-104111) Creating domain...
	I0729 13:27:52.105613  992950 main.go:141] libmachine: (ha-104111) Waiting to get IP...
	I0729 13:27:52.106450  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:52.106796  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:52.106823  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:52.106778  992973 retry.go:31] will retry after 241.772351ms: waiting for machine to come up
	I0729 13:27:52.350209  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:52.350683  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:52.350711  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:52.350631  992973 retry.go:31] will retry after 337.465105ms: waiting for machine to come up
	I0729 13:27:52.690197  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:52.690572  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:52.690605  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:52.690510  992973 retry.go:31] will retry after 387.904142ms: waiting for machine to come up
	I0729 13:27:53.080125  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:53.080538  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:53.080567  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:53.080465  992973 retry.go:31] will retry after 487.916897ms: waiting for machine to come up
	I0729 13:27:53.570315  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:53.570738  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:53.570767  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:53.570686  992973 retry.go:31] will retry after 466.286646ms: waiting for machine to come up
	I0729 13:27:54.038226  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:54.038676  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:54.038721  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:54.038635  992973 retry.go:31] will retry after 815.865488ms: waiting for machine to come up
	I0729 13:27:54.856028  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:54.856378  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:54.856438  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:54.856337  992973 retry.go:31] will retry after 972.389168ms: waiting for machine to come up
	I0729 13:27:55.830484  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:55.830991  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:55.831018  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:55.830938  992973 retry.go:31] will retry after 1.143318078s: waiting for machine to come up
	I0729 13:27:56.975732  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:56.976170  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:56.976194  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:56.976118  992973 retry.go:31] will retry after 1.842354399s: waiting for machine to come up
	I0729 13:27:58.821254  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:58.821629  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:58.821659  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:58.821581  992973 retry.go:31] will retry after 1.46639238s: waiting for machine to come up
	I0729 13:28:00.290154  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:00.290479  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:28:00.290511  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:28:00.290422  992973 retry.go:31] will retry after 2.370742002s: waiting for machine to come up
	I0729 13:28:02.663791  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:02.664211  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:28:02.664241  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:28:02.664148  992973 retry.go:31] will retry after 2.99875569s: waiting for machine to come up
	I0729 13:28:05.666325  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:05.666722  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:28:05.666748  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:28:05.666682  992973 retry.go:31] will retry after 3.701072815s: waiting for machine to come up
	I0729 13:28:09.371868  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:09.372285  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:28:09.372311  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:28:09.372241  992973 retry.go:31] will retry after 5.605611611s: waiting for machine to come up
	I0729 13:28:14.983056  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:14.983474  992950 main.go:141] libmachine: (ha-104111) Found IP for machine: 192.168.39.120
	I0729 13:28:14.983490  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has current primary IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:14.983496  992950 main.go:141] libmachine: (ha-104111) Reserving static IP address...
	I0729 13:28:14.983848  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find host DHCP lease matching {name: "ha-104111", mac: "52:54:00:44:4b:6b", ip: "192.168.39.120"} in network mk-ha-104111
	I0729 13:28:15.055152  992950 main.go:141] libmachine: (ha-104111) Reserved static IP address: 192.168.39.120
	I0729 13:28:15.055179  992950 main.go:141] libmachine: (ha-104111) Waiting for SSH to be available...
	I0729 13:28:15.055188  992950 main.go:141] libmachine: (ha-104111) DBG | Getting to WaitForSSH function...
	I0729 13:28:15.058104  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.058535  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.058566  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.058694  992950 main.go:141] libmachine: (ha-104111) DBG | Using SSH client type: external
	I0729 13:28:15.058720  992950 main.go:141] libmachine: (ha-104111) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa (-rw-------)
	I0729 13:28:15.058739  992950 main.go:141] libmachine: (ha-104111) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:28:15.058761  992950 main.go:141] libmachine: (ha-104111) DBG | About to run SSH command:
	I0729 13:28:15.058783  992950 main.go:141] libmachine: (ha-104111) DBG | exit 0
	I0729 13:28:15.184333  992950 main.go:141] libmachine: (ha-104111) DBG | SSH cmd err, output: <nil>: 
	I0729 13:28:15.184642  992950 main.go:141] libmachine: (ha-104111) KVM machine creation complete!
	I0729 13:28:15.184936  992950 main.go:141] libmachine: (ha-104111) Calling .GetConfigRaw
	I0729 13:28:15.185506  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:15.185699  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:15.185826  992950 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:28:15.185841  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:28:15.187000  992950 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:28:15.187017  992950 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:28:15.187025  992950 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:28:15.187032  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.189218  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.189563  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.189587  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.189708  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.189900  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.190068  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.190193  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.190337  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.190582  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.190595  992950 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:28:15.299583  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:28:15.299618  992950 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:28:15.299630  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.302494  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.302852  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.302883  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.302992  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.303196  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.303396  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.303552  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.303714  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.303892  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.303904  992950 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:28:15.412926  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:28:15.413014  992950 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:28:15.413026  992950 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:28:15.413033  992950 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:28:15.413297  992950 buildroot.go:166] provisioning hostname "ha-104111"
	I0729 13:28:15.413328  992950 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:28:15.413518  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.416085  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.416327  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.416350  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.416526  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.416700  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.416856  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.416992  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.417113  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.417303  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.417317  992950 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-104111 && echo "ha-104111" | sudo tee /etc/hostname
	I0729 13:28:15.538759  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111
	
	I0729 13:28:15.538794  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.541286  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.541625  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.541651  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.541797  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.541965  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.542104  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.542283  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.542420  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.542627  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.542648  992950 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-104111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-104111/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-104111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:28:15.661668  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:28:15.661699  992950 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:28:15.661719  992950 buildroot.go:174] setting up certificates
	I0729 13:28:15.661730  992950 provision.go:84] configureAuth start
	I0729 13:28:15.661739  992950 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:28:15.662041  992950 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:28:15.664715  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.665028  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.665070  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.665202  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.667336  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.667669  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.667698  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.667817  992950 provision.go:143] copyHostCerts
	I0729 13:28:15.667848  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:28:15.667888  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 13:28:15.667898  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:28:15.667967  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:28:15.668070  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:28:15.668090  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 13:28:15.668097  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:28:15.668121  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:28:15.668177  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:28:15.668193  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 13:28:15.668201  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:28:15.668223  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:28:15.668289  992950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.ha-104111 san=[127.0.0.1 192.168.39.120 ha-104111 localhost minikube]
	I0729 13:28:15.745030  992950 provision.go:177] copyRemoteCerts
	I0729 13:28:15.745104  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:28:15.745130  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.747826  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.748132  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.748155  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.748305  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.748534  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.748704  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.748814  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:15.834722  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:28:15.834800  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:28:15.858258  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:28:15.858319  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 13:28:15.880816  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:28:15.880885  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:28:15.903313  992950 provision.go:87] duration metric: took 241.568168ms to configureAuth
	I0729 13:28:15.903338  992950 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:28:15.903546  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:28:15.903651  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.906022  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.906348  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.906377  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.906480  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.906698  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.906854  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.906988  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.907116  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.907270  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.907284  992950 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:28:16.174285  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:28:16.174337  992950 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:28:16.174347  992950 main.go:141] libmachine: (ha-104111) Calling .GetURL
	I0729 13:28:16.175719  992950 main.go:141] libmachine: (ha-104111) DBG | Using libvirt version 6000000
	I0729 13:28:16.177617  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.177975  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.177996  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.178211  992950 main.go:141] libmachine: Docker is up and running!
	I0729 13:28:16.178227  992950 main.go:141] libmachine: Reticulating splines...
	I0729 13:28:16.178234  992950 client.go:171] duration metric: took 25.300294822s to LocalClient.Create
	I0729 13:28:16.178257  992950 start.go:167] duration metric: took 25.300358917s to libmachine.API.Create "ha-104111"
	I0729 13:28:16.178267  992950 start.go:293] postStartSetup for "ha-104111" (driver="kvm2")
	I0729 13:28:16.178277  992950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:28:16.178312  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.178559  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:28:16.178603  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:16.180432  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.180790  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.180815  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.180971  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:16.181146  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.181307  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:16.181441  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:16.266860  992950 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:28:16.271002  992950 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:28:16.271024  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:28:16.271102  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:28:16.271194  992950 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 13:28:16.271206  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 13:28:16.271336  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:28:16.280902  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:28:16.303532  992950 start.go:296] duration metric: took 125.253889ms for postStartSetup
	I0729 13:28:16.303585  992950 main.go:141] libmachine: (ha-104111) Calling .GetConfigRaw
	I0729 13:28:16.304161  992950 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:28:16.306579  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.306900  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.306926  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.307255  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:28:16.307422  992950 start.go:128] duration metric: took 25.447892576s to createHost
	I0729 13:28:16.307517  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:16.309538  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.309806  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.309833  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.309947  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:16.310134  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.310254  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.310380  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:16.310505  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:16.310696  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:16.310711  992950 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:28:16.421166  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259696.378704676
	
	I0729 13:28:16.421193  992950 fix.go:216] guest clock: 1722259696.378704676
	I0729 13:28:16.421200  992950 fix.go:229] Guest: 2024-07-29 13:28:16.378704676 +0000 UTC Remote: 2024-07-29 13:28:16.307433053 +0000 UTC m=+25.553437361 (delta=71.271623ms)
	I0729 13:28:16.421219  992950 fix.go:200] guest clock delta is within tolerance: 71.271623ms
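Note: the clock check above runs `date +%s.%N` on the guest (the `%!s(MISSING)` is a quirk of how minikube logs the format string) and accepts the result when the drift from the host-side timestamp is small. A minimal stand-alone sketch of that comparison using the values from this log; the 2s tolerance and the helper name are assumptions, not minikube's actual code:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts the guest's `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values taken from the log lines above.
	guest, err := parseGuestClock("1722259696.378704676")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 7, 29, 13, 28, 16, 307433053, time.UTC)

	// Assumed tolerance: accept the guest clock if the drift stays under 2s.
	const tolerance = 2 * time.Second
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, a resync would be needed\n", delta)
	}
}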
	I0729 13:28:16.421225  992950 start.go:83] releasing machines lock for "ha-104111", held for 25.56178633s
	I0729 13:28:16.421244  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.421537  992950 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:28:16.424050  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.424391  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.424439  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.424587  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.425069  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.425247  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.425376  992950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:28:16.425427  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:16.425498  992950 ssh_runner.go:195] Run: cat /version.json
	I0729 13:28:16.425525  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:16.427791  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.428091  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.428116  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.428202  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.428218  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:16.428381  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.428565  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:16.428621  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.428641  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.428729  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:16.428831  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:16.428986  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.429148  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:16.429271  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:16.534145  992950 ssh_runner.go:195] Run: systemctl --version
	I0729 13:28:16.539884  992950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:28:16.694793  992950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:28:16.700796  992950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:28:16.700850  992950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:28:16.715914  992950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
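Note: conflicting CNI configs are disabled by renaming any *bridge*/*podman* files under /etc/cni/net.d to *.mk_disabled (here 87-podman-bridge.conflist), leaving a single CNI in charge. A rough stand-alone equivalent of that `find ... -exec mv` step, with the directory and suffix taken from the log and everything else illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs so only one CNI
// remains active, mirroring the `find ... -exec sh -c "sudo mv ..."` in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "warning:", err)
	}
	fmt.Println("disabled CNI configs:", disabled)
}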
	I0729 13:28:16.715935  992950 start.go:495] detecting cgroup driver to use...
	I0729 13:28:16.715997  992950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:28:16.732889  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:28:16.746761  992950 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:28:16.746830  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:28:16.759847  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:28:16.772834  992950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:28:16.884479  992950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:28:17.022280  992950 docker.go:233] disabling docker service ...
	I0729 13:28:17.022353  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:28:17.036668  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:28:17.049204  992950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:28:17.177884  992950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:28:17.296976  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:28:17.310302  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:28:17.327927  992950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:28:17.327986  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.337890  992950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:28:17.337961  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.347740  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.357142  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.367108  992950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:28:17.377001  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.386599  992950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.403144  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
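Note: the block of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A sketch of the first two substitutions done with regexps instead of sed; the file path and replacement values come from the log, the rest is an assumption rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same substitutions the log performs with sed:
// force the pause image and switch the cgroup manager to cgroupfs.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}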
	I0729 13:28:17.413026  992950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:28:17.421978  992950 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:28:17.422020  992950 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:28:17.434078  992950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:28:17.442867  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:28:17.567268  992950 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:28:17.707576  992950 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:28:17.707669  992950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:28:17.712696  992950 start.go:563] Will wait 60s for crictl version
	I0729 13:28:17.712753  992950 ssh_runner.go:195] Run: which crictl
	I0729 13:28:17.716312  992950 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:28:17.754087  992950 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
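Note: after CRI-O restarts, the tooling waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s more for `crictl version` to answer (here reporting cri-o 1.29.1). A small polling sketch of that wait; the timeouts and paths come from the log, the helper name is made up:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls check once per second until it succeeds or the timeout expires.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v: %w", timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// Wait for the CRI socket, then for crictl to report a runtime version.
	err := waitFor(60*time.Second, func() error {
		_, err := os.Stat("/var/run/crio/crio.sock")
		return err
	})
	if err == nil {
		err = waitFor(60*time.Second, func() error {
			return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
		})
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}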
	I0729 13:28:17.754174  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:28:17.783004  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:28:17.813418  992950 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:28:17.814636  992950 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:28:17.816916  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:17.817297  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:17.817315  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:17.817564  992950 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:28:17.821740  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:28:17.834267  992950 kubeadm.go:883] updating cluster {Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:28:17.834380  992950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:28:17.834420  992950 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:28:17.865819  992950 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:28:17.865905  992950 ssh_runner.go:195] Run: which lz4
	I0729 13:28:17.869709  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 13:28:17.869806  992950 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:28:17.873918  992950 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:28:17.873958  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:28:19.218990  992950 crio.go:462] duration metric: took 1.349212991s to copy over tarball
	I0729 13:28:19.219082  992950 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:28:21.351652  992950 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132536185s)
	I0729 13:28:21.351681  992950 crio.go:469] duration metric: took 2.132659596s to extract the tarball
	I0729 13:28:21.351689  992950 ssh_runner.go:146] rm: /preloaded.tar.lz4
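Note: the preload phase stats /preloaded.tar.lz4 on the guest, copies the ~406 MB preloaded-images tarball only because it is missing, extracts it under /var with tar + lz4, and removes it. A minimal sketch of the "copy only if absent" decision; the transfer and extraction are stubbed out as comments and the function name is illustrative:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// needsCopy reports whether the preload tarball still has to be transferred,
// mirroring the `stat /preloaded.tar.lz4` existence check in the log.
func needsCopy(remotePath string) (bool, error) {
	_, err := os.Stat(remotePath)
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil
	}
	return false, err
}

func main() {
	copyNeeded, err := needsCopy("/preloaded.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if copyNeeded {
		fmt.Println("tarball missing: would scp preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4")
		// followed by: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	} else {
		fmt.Println("tarball already present: skipping transfer")
	}
}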
	I0729 13:28:21.389185  992950 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:28:21.438048  992950 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:28:21.438073  992950 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:28:21.438083  992950 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.30.3 crio true true} ...
	I0729 13:28:21.438242  992950 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-104111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
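Note: the kubelet drop-in shown above is a template filled in with the node name, node IP, and Kubernetes version, then written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes, a few lines below). A text/template sketch of that substitution; the template text is paraphrased from the log and is not the actual minikube template:

package main

import (
	"os"
	"text/template"
)

// unit mirrors the kubelet drop-in shown in the log, with the per-node values templated.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above.
	err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.3",
		"NodeName":          "ha-104111",
		"NodeIP":            "192.168.39.120",
	})
	if err != nil {
		panic(err)
	}
}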
	I0729 13:28:21.438324  992950 ssh_runner.go:195] Run: crio config
	I0729 13:28:21.483647  992950 cni.go:84] Creating CNI manager for ""
	I0729 13:28:21.483671  992950 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 13:28:21.483680  992950 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:28:21.483703  992950 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-104111 NodeName:ha-104111 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:28:21.483857  992950 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-104111"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
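Note: the generated kubeadm config is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that later lands in /var/tmp/minikube/kubeadm.yaml.new. A quick stdlib-only sanity check that splits such a stream and lists each document's kind; this is purely illustrative and not part of the test harness:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// listKinds prints the kind declared in each YAML document of a multi-doc stream.
func listKinds(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	doc := 1
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "---" {
			doc++
			continue
		}
		if strings.HasPrefix(line, "kind:") {
			fmt.Printf("document %d: %s\n", doc, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
		}
	}
	return sc.Err()
}

func main() {
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}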
	
	I0729 13:28:21.483889  992950 kube-vip.go:115] generating kube-vip config ...
	I0729 13:28:21.483932  992950 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 13:28:21.503075  992950 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 13:28:21.503238  992950 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0729 13:28:21.503310  992950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:28:21.513555  992950 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:28:21.513634  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 13:28:21.523417  992950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 13:28:21.539906  992950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:28:21.556171  992950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 13:28:21.572306  992950 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 13:28:21.588601  992950 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 13:28:21.592303  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
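Note: both host aliases (host.minikube.internal earlier and control-plane.minikube.internal here) are injected with the same bash one-liner: grep away any stale line, echo the new mapping, and copy the result back over /etc/hosts. The same filter-and-append expressed in Go, as a sketch; only the path and the 192.168.39.254 entry come from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry removes any stale line for hostname and appends "ip\thostname",
// matching the `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` one-liner in the log.
func ensureHostEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale mapping, like `grep -v`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}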
	I0729 13:28:21.604503  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:28:21.732996  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:28:21.751100  992950 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111 for IP: 192.168.39.120
	I0729 13:28:21.751125  992950 certs.go:194] generating shared ca certs ...
	I0729 13:28:21.751141  992950 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:21.751320  992950 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:28:21.751382  992950 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:28:21.751396  992950 certs.go:256] generating profile certs ...
	I0729 13:28:21.751456  992950 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key
	I0729 13:28:21.751472  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt with IP's: []
	I0729 13:28:22.105163  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt ...
	I0729 13:28:22.105196  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt: {Name:mkc3fe2e5d41e3efc36f038ba4c6055663b8dc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.105368  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key ...
	I0729 13:28:22.105378  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key: {Name:mk3b1e6250db3fab7db8560f50a7c8f8313bd412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.105452  992950 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.bdd936fb
	I0729 13:28:22.105467  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.bdd936fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.120 192.168.39.254]
	I0729 13:28:22.236602  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.bdd936fb ...
	I0729 13:28:22.236632  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.bdd936fb: {Name:mk2839a37647f2d64573698795d7cf40367c9e2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.236786  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.bdd936fb ...
	I0729 13:28:22.236800  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.bdd936fb: {Name:mk9ce3510a255664f7a806593cc42fe59a2e626d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.236872  992950 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.bdd936fb -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt
	I0729 13:28:22.236966  992950 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.bdd936fb -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key
	I0729 13:28:22.237027  992950 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key
	I0729 13:28:22.237047  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt with IP's: []
	I0729 13:28:22.293825  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt ...
	I0729 13:28:22.293857  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt: {Name:mk68f198bf63d825af64973559bb29938c0cec2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.294027  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key ...
	I0729 13:28:22.294038  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key: {Name:mk492dd4f00a939c2ebdc925d86fe11b5b3b16fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.294112  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:28:22.294129  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:28:22.294142  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:28:22.294154  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:28:22.294165  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:28:22.294177  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:28:22.294189  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:28:22.294200  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:28:22.294260  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 13:28:22.294293  992950 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 13:28:22.294303  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:28:22.294328  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:28:22.294350  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:28:22.294373  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:28:22.294407  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:28:22.294433  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 13:28:22.294446  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:22.294457  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 13:28:22.295038  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:28:22.321141  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:28:22.345483  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:28:22.368908  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:28:22.391452  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:28:22.414268  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:28:22.436942  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:28:22.459693  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:28:22.481930  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 13:28:22.504273  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:28:22.529719  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 13:28:22.552777  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:28:22.569738  992950 ssh_runner.go:195] Run: openssl version
	I0729 13:28:22.575826  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 13:28:22.586583  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 13:28:22.592046  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 13:28:22.592145  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 13:28:22.597882  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:28:22.608836  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:28:22.619552  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:22.624002  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:22.624055  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:22.629501  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:28:22.640082  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 13:28:22.650915  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 13:28:22.654989  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 13:28:22.655049  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 13:28:22.660522  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
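Note: each CA is linked into /etc/ssl/certs twice: once under its own name and once under the OpenSSL subject hash that `openssl x509 -hash -noout` prints (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the others). A hedged sketch that computes the hash by shelling out to openssl and creates the hash link; this is illustrative, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates certsDir/<hash>.0 -> certPath so OpenSSL-based
// clients can find the CA, mirroring the `openssl x509 -hash` + `ln -fs` steps.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any existing link, like `ln -fs`
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}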
	I0729 13:28:22.671050  992950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:28:22.674773  992950 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:28:22.674837  992950 kubeadm.go:392] StartCluster: {Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:28:22.674926  992950 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:28:22.674982  992950 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:28:22.713478  992950 cri.go:89] found id: ""
	I0729 13:28:22.713543  992950 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:28:22.723732  992950 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:28:22.736813  992950 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:28:22.747594  992950 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:28:22.747627  992950 kubeadm.go:157] found existing configuration files:
	
	I0729 13:28:22.747680  992950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:28:22.757528  992950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:28:22.757590  992950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:28:22.768035  992950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:28:22.777779  992950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:28:22.777834  992950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:28:22.787666  992950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:28:22.797092  992950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:28:22.797144  992950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:28:22.806932  992950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:28:22.816137  992950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:28:22.816184  992950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:28:22.825638  992950 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:28:23.067098  992950 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:28:33.934512  992950 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:28:33.934579  992950 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:28:33.934734  992950 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:28:33.934887  992950 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:28:33.934981  992950 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:28:33.935067  992950 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:28:33.936487  992950 out.go:204]   - Generating certificates and keys ...
	I0729 13:28:33.936597  992950 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:28:33.936688  992950 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:28:33.936793  992950 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 13:28:33.936886  992950 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 13:28:33.936969  992950 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 13:28:33.937034  992950 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 13:28:33.937131  992950 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 13:28:33.937308  992950 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-104111 localhost] and IPs [192.168.39.120 127.0.0.1 ::1]
	I0729 13:28:33.937399  992950 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 13:28:33.937555  992950 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-104111 localhost] and IPs [192.168.39.120 127.0.0.1 ::1]
	I0729 13:28:33.937654  992950 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 13:28:33.937736  992950 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 13:28:33.937815  992950 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 13:28:33.937897  992950 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:28:33.937961  992950 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:28:33.938042  992950 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:28:33.938140  992950 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:28:33.938235  992950 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:28:33.938314  992950 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:28:33.938428  992950 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:28:33.938503  992950 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:28:33.940206  992950 out.go:204]   - Booting up control plane ...
	I0729 13:28:33.940284  992950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:28:33.940350  992950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:28:33.940420  992950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:28:33.940549  992950 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:28:33.940701  992950 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:28:33.940753  992950 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:28:33.940941  992950 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:28:33.941052  992950 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:28:33.941135  992950 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001569089s
	I0729 13:28:33.941215  992950 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:28:33.941269  992950 kubeadm.go:310] [api-check] The API server is healthy after 5.806345069s
	I0729 13:28:33.941372  992950 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:28:33.941474  992950 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:28:33.941536  992950 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:28:33.941692  992950 kubeadm.go:310] [mark-control-plane] Marking the node ha-104111 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:28:33.941743  992950 kubeadm.go:310] [bootstrap-token] Using token: xdwoky.ewd6hddagkcpyjfo
	I0729 13:28:33.942935  992950 out.go:204]   - Configuring RBAC rules ...
	I0729 13:28:33.943017  992950 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:28:33.943085  992950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:28:33.943232  992950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:28:33.943352  992950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:28:33.943451  992950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:28:33.943522  992950 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:28:33.943616  992950 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:28:33.943657  992950 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:28:33.943695  992950 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:28:33.943701  992950 kubeadm.go:310] 
	I0729 13:28:33.943749  992950 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:28:33.943755  992950 kubeadm.go:310] 
	I0729 13:28:33.943821  992950 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:28:33.943830  992950 kubeadm.go:310] 
	I0729 13:28:33.943883  992950 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:28:33.943964  992950 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:28:33.944031  992950 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:28:33.944041  992950 kubeadm.go:310] 
	I0729 13:28:33.944095  992950 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:28:33.944101  992950 kubeadm.go:310] 
	I0729 13:28:33.944139  992950 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:28:33.944147  992950 kubeadm.go:310] 
	I0729 13:28:33.944209  992950 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:28:33.944291  992950 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:28:33.944362  992950 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:28:33.944371  992950 kubeadm.go:310] 
	I0729 13:28:33.944457  992950 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:28:33.944527  992950 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:28:33.944533  992950 kubeadm.go:310] 
	I0729 13:28:33.944604  992950 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdwoky.ewd6hddagkcpyjfo \
	I0729 13:28:33.944708  992950 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 13:28:33.944733  992950 kubeadm.go:310] 	--control-plane 
	I0729 13:28:33.944737  992950 kubeadm.go:310] 
	I0729 13:28:33.944818  992950 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:28:33.944825  992950 kubeadm.go:310] 
	I0729 13:28:33.944894  992950 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdwoky.ewd6hddagkcpyjfo \
	I0729 13:28:33.944991  992950 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
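Note: the --discovery-token-ca-cert-hash value printed above is the SHA-256 digest of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A minimal Go sketch of that computation follows; the CA path /etc/kubernetes/pki/ca.crt is the kubeadm default and the file name cahash.go is illustrative, not part of minikube.

    // cahash.go (hypothetical helper): print a kubeadm-style CA cert hash.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }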
	I0729 13:28:33.945005  992950 cni.go:84] Creating CNI manager for ""
	I0729 13:28:33.945012  992950 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 13:28:33.946411  992950 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 13:28:33.947617  992950 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 13:28:33.954558  992950 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 13:28:33.954583  992950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 13:28:33.977635  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 13:28:34.350635  992950 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:28:34.350736  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:34.350773  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-104111 minikube.k8s.io/updated_at=2024_07_29T13_28_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=ha-104111 minikube.k8s.io/primary=true
	I0729 13:28:34.565795  992950 ops.go:34] apiserver oom_adj: -16
	I0729 13:28:34.566016  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:35.066116  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:35.566930  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:36.066139  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:36.566669  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:37.066772  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:37.566404  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:38.066899  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:38.566503  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:39.066796  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:39.566354  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:40.066322  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:40.566701  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:41.066952  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:41.566851  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:42.066976  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:42.566510  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:43.066525  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:43.566299  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:44.066428  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:44.566724  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:45.066936  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:45.566316  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:46.066630  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:46.202876  992950 kubeadm.go:1113] duration metric: took 11.852222314s to wait for elevateKubeSystemPrivileges
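The block of repeated "kubectl get sa default" runs above is the elevateKubeSystemPrivileges wait: minikube polls roughly every 500ms until the default service account exists before continuing. A rough Go sketch of such a poll loop, assuming a run callback standing in for the ssh_runner seen in the log (package and function names are illustrative, not minikube's actual API):

    package kverify // hypothetical package name

    import (
        "fmt"
        "time"
    )

    // waitForDefaultSA retries the same kubectl command seen in the log above,
    // roughly every 500ms, until it succeeds or the timeout expires.
    func waitForDefaultSA(run func(cmd string) error, timeout time.Duration) error {
        cmd := "sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default " +
            "--kubeconfig=/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := run(cmd); err == nil {
                return nil // the default service account is readable
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %v", timeout)
    }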
	I0729 13:28:46.202909  992950 kubeadm.go:394] duration metric: took 23.528078246s to StartCluster
	I0729 13:28:46.202936  992950 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:46.203057  992950 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:28:46.204061  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:46.204293  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 13:28:46.204333  992950 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:28:46.204365  992950 start.go:241] waiting for startup goroutines ...
	I0729 13:28:46.204378  992950 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:28:46.204472  992950 addons.go:69] Setting storage-provisioner=true in profile "ha-104111"
	I0729 13:28:46.204481  992950 addons.go:69] Setting default-storageclass=true in profile "ha-104111"
	I0729 13:28:46.204521  992950 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-104111"
	I0729 13:28:46.204600  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:28:46.204522  992950 addons.go:234] Setting addon storage-provisioner=true in "ha-104111"
	I0729 13:28:46.204656  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:28:46.204992  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.205027  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.205067  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.205100  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.220591  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0729 13:28:46.220885  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0729 13:28:46.221108  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.221364  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.221622  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.221641  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.221934  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.221961  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.222000  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.222192  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:28:46.222244  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.222702  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.222731  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.224507  992950 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:28:46.224850  992950 kapi.go:59] client config for ha-104111: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt", KeyFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key", CAFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 13:28:46.225375  992950 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 13:28:46.225638  992950 addons.go:234] Setting addon default-storageclass=true in "ha-104111"
	I0729 13:28:46.225684  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:28:46.226049  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.226078  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.237878  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39635
	I0729 13:28:46.238450  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.239005  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.239032  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.239471  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.239692  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:28:46.241072  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0729 13:28:46.241493  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:46.241561  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.241990  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.242009  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.242346  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.242823  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.242863  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.243202  992950 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:28:46.244666  992950 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:28:46.244690  992950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:28:46.244721  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:46.247848  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:46.248337  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:46.248358  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:46.248525  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:46.248674  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:46.248798  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:46.248903  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:46.259524  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
	I0729 13:28:46.259902  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.260306  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.260328  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.260691  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.260872  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:28:46.262289  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:46.262543  992950 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:28:46.262560  992950 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:28:46.262577  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:46.265345  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:46.265755  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:46.265780  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:46.265937  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:46.266121  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:46.266267  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:46.266406  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:46.392436  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 13:28:46.409069  992950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:28:46.459119  992950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:28:46.874034  992950 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
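For reference, the sed pipeline above rewrites the coredns ConfigMap so the Corefile gains a log directive and, just before the forward-to-/etc/resolv.conf line, a hosts block of roughly this shape:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

which is what makes host.minikube.internal resolve to the host-side gateway from inside the cluster.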
	I0729 13:28:47.023431  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.023456  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.023494  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.023517  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.023752  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.023768  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.023778  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.023806  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.023888  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.023900  992950 main.go:141] libmachine: (ha-104111) DBG | Closing plugin on server side
	I0729 13:28:47.023904  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.023922  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.023928  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.024068  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.024079  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.024210  992950 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 13:28:47.024216  992950 round_trippers.go:469] Request Headers:
	I0729 13:28:47.024226  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:28:47.024239  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:28:47.024336  992950 main.go:141] libmachine: (ha-104111) DBG | Closing plugin on server side
	I0729 13:28:47.024397  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.024467  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.034431  992950 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0729 13:28:47.035185  992950 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 13:28:47.035203  992950 round_trippers.go:469] Request Headers:
	I0729 13:28:47.035211  992950 round_trippers.go:473]     Content-Type: application/json
	I0729 13:28:47.035216  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:28:47.035228  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:28:47.037982  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:28:47.038136  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.038149  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.038377  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.038394  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.039939  992950 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 13:28:47.040923  992950 addons.go:510] duration metric: took 836.544864ms for enable addons: enabled=[storage-provisioner default-storageclass]
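Both addons here are applied as part of start, directly from /etc/kubernetes/addons during bootstrap; the equivalent post-start toggle would be "minikube addons enable storage-provisioner -p ha-104111" (and likewise for default-storageclass).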
	I0729 13:28:47.040958  992950 start.go:246] waiting for cluster config update ...
	I0729 13:28:47.040973  992950 start.go:255] writing updated cluster config ...
	I0729 13:28:47.042655  992950 out.go:177] 
	I0729 13:28:47.043885  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:28:47.043950  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:28:47.045507  992950 out.go:177] * Starting "ha-104111-m02" control-plane node in "ha-104111" cluster
	I0729 13:28:47.046600  992950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:28:47.046624  992950 cache.go:56] Caching tarball of preloaded images
	I0729 13:28:47.046710  992950 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:28:47.046721  992950 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:28:47.046824  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:28:47.047024  992950 start.go:360] acquireMachinesLock for ha-104111-m02: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:28:47.047070  992950 start.go:364] duration metric: took 27.861µs to acquireMachinesLock for "ha-104111-m02"
	I0729 13:28:47.047087  992950 start.go:93] Provisioning new machine with config: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:28:47.047151  992950 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 13:28:47.048656  992950 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:28:47.048750  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:47.048774  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:47.063244  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I0729 13:28:47.063627  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:47.064088  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:47.064106  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:47.064446  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:47.064680  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetMachineName
	I0729 13:28:47.064824  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:28:47.064990  992950 start.go:159] libmachine.API.Create for "ha-104111" (driver="kvm2")
	I0729 13:28:47.065015  992950 client.go:168] LocalClient.Create starting
	I0729 13:28:47.065059  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 13:28:47.065096  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:28:47.065117  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:28:47.065193  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 13:28:47.065218  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:28:47.065231  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:28:47.065259  992950 main.go:141] libmachine: Running pre-create checks...
	I0729 13:28:47.065270  992950 main.go:141] libmachine: (ha-104111-m02) Calling .PreCreateCheck
	I0729 13:28:47.065465  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetConfigRaw
	I0729 13:28:47.065860  992950 main.go:141] libmachine: Creating machine...
	I0729 13:28:47.065877  992950 main.go:141] libmachine: (ha-104111-m02) Calling .Create
	I0729 13:28:47.066036  992950 main.go:141] libmachine: (ha-104111-m02) Creating KVM machine...
	I0729 13:28:47.067122  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found existing default KVM network
	I0729 13:28:47.067317  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found existing private KVM network mk-ha-104111
	I0729 13:28:47.067471  992950 main.go:141] libmachine: (ha-104111-m02) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02 ...
	I0729 13:28:47.067497  992950 main.go:141] libmachine: (ha-104111-m02) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:28:47.067565  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.067447  993329 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:28:47.067661  992950 main.go:141] libmachine: (ha-104111-m02) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:28:47.329794  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.329677  993329 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa...
	I0729 13:28:47.429305  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.429180  993329 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/ha-104111-m02.rawdisk...
	I0729 13:28:47.429340  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Writing magic tar header
	I0729 13:28:47.429356  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Writing SSH key tar header
	I0729 13:28:47.429370  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.429299  993329 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02 ...
	I0729 13:28:47.429472  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02
	I0729 13:28:47.429524  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02 (perms=drwx------)
	I0729 13:28:47.429537  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 13:28:47.429555  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:28:47.429568  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 13:28:47.429582  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:28:47.429595  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:28:47.429609  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:28:47.429626  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 13:28:47.429634  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home
	I0729 13:28:47.429645  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Skipping /home - not owner
	I0729 13:28:47.429670  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 13:28:47.429687  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:28:47.429698  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:28:47.429708  992950 main.go:141] libmachine: (ha-104111-m02) Creating domain...
	I0729 13:28:47.430594  992950 main.go:141] libmachine: (ha-104111-m02) define libvirt domain using xml: 
	I0729 13:28:47.430619  992950 main.go:141] libmachine: (ha-104111-m02) <domain type='kvm'>
	I0729 13:28:47.430630  992950 main.go:141] libmachine: (ha-104111-m02)   <name>ha-104111-m02</name>
	I0729 13:28:47.430664  992950 main.go:141] libmachine: (ha-104111-m02)   <memory unit='MiB'>2200</memory>
	I0729 13:28:47.430688  992950 main.go:141] libmachine: (ha-104111-m02)   <vcpu>2</vcpu>
	I0729 13:28:47.430695  992950 main.go:141] libmachine: (ha-104111-m02)   <features>
	I0729 13:28:47.430706  992950 main.go:141] libmachine: (ha-104111-m02)     <acpi/>
	I0729 13:28:47.430715  992950 main.go:141] libmachine: (ha-104111-m02)     <apic/>
	I0729 13:28:47.430737  992950 main.go:141] libmachine: (ha-104111-m02)     <pae/>
	I0729 13:28:47.430758  992950 main.go:141] libmachine: (ha-104111-m02)     
	I0729 13:28:47.430792  992950 main.go:141] libmachine: (ha-104111-m02)   </features>
	I0729 13:28:47.430817  992950 main.go:141] libmachine: (ha-104111-m02)   <cpu mode='host-passthrough'>
	I0729 13:28:47.430830  992950 main.go:141] libmachine: (ha-104111-m02)   
	I0729 13:28:47.430847  992950 main.go:141] libmachine: (ha-104111-m02)   </cpu>
	I0729 13:28:47.430859  992950 main.go:141] libmachine: (ha-104111-m02)   <os>
	I0729 13:28:47.430867  992950 main.go:141] libmachine: (ha-104111-m02)     <type>hvm</type>
	I0729 13:28:47.430997  992950 main.go:141] libmachine: (ha-104111-m02)     <boot dev='cdrom'/>
	I0729 13:28:47.431039  992950 main.go:141] libmachine: (ha-104111-m02)     <boot dev='hd'/>
	I0729 13:28:47.431054  992950 main.go:141] libmachine: (ha-104111-m02)     <bootmenu enable='no'/>
	I0729 13:28:47.431061  992950 main.go:141] libmachine: (ha-104111-m02)   </os>
	I0729 13:28:47.431070  992950 main.go:141] libmachine: (ha-104111-m02)   <devices>
	I0729 13:28:47.431082  992950 main.go:141] libmachine: (ha-104111-m02)     <disk type='file' device='cdrom'>
	I0729 13:28:47.431097  992950 main.go:141] libmachine: (ha-104111-m02)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/boot2docker.iso'/>
	I0729 13:28:47.431115  992950 main.go:141] libmachine: (ha-104111-m02)       <target dev='hdc' bus='scsi'/>
	I0729 13:28:47.431135  992950 main.go:141] libmachine: (ha-104111-m02)       <readonly/>
	I0729 13:28:47.431145  992950 main.go:141] libmachine: (ha-104111-m02)     </disk>
	I0729 13:28:47.431155  992950 main.go:141] libmachine: (ha-104111-m02)     <disk type='file' device='disk'>
	I0729 13:28:47.431169  992950 main.go:141] libmachine: (ha-104111-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:28:47.431192  992950 main.go:141] libmachine: (ha-104111-m02)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/ha-104111-m02.rawdisk'/>
	I0729 13:28:47.431210  992950 main.go:141] libmachine: (ha-104111-m02)       <target dev='hda' bus='virtio'/>
	I0729 13:28:47.431219  992950 main.go:141] libmachine: (ha-104111-m02)     </disk>
	I0729 13:28:47.431228  992950 main.go:141] libmachine: (ha-104111-m02)     <interface type='network'>
	I0729 13:28:47.431241  992950 main.go:141] libmachine: (ha-104111-m02)       <source network='mk-ha-104111'/>
	I0729 13:28:47.431251  992950 main.go:141] libmachine: (ha-104111-m02)       <model type='virtio'/>
	I0729 13:28:47.431264  992950 main.go:141] libmachine: (ha-104111-m02)     </interface>
	I0729 13:28:47.431285  992950 main.go:141] libmachine: (ha-104111-m02)     <interface type='network'>
	I0729 13:28:47.431298  992950 main.go:141] libmachine: (ha-104111-m02)       <source network='default'/>
	I0729 13:28:47.431307  992950 main.go:141] libmachine: (ha-104111-m02)       <model type='virtio'/>
	I0729 13:28:47.431332  992950 main.go:141] libmachine: (ha-104111-m02)     </interface>
	I0729 13:28:47.431354  992950 main.go:141] libmachine: (ha-104111-m02)     <serial type='pty'>
	I0729 13:28:47.431387  992950 main.go:141] libmachine: (ha-104111-m02)       <target port='0'/>
	I0729 13:28:47.431407  992950 main.go:141] libmachine: (ha-104111-m02)     </serial>
	I0729 13:28:47.431437  992950 main.go:141] libmachine: (ha-104111-m02)     <console type='pty'>
	I0729 13:28:47.431459  992950 main.go:141] libmachine: (ha-104111-m02)       <target type='serial' port='0'/>
	I0729 13:28:47.431469  992950 main.go:141] libmachine: (ha-104111-m02)     </console>
	I0729 13:28:47.431479  992950 main.go:141] libmachine: (ha-104111-m02)     <rng model='virtio'>
	I0729 13:28:47.431490  992950 main.go:141] libmachine: (ha-104111-m02)       <backend model='random'>/dev/random</backend>
	I0729 13:28:47.431498  992950 main.go:141] libmachine: (ha-104111-m02)     </rng>
	I0729 13:28:47.431509  992950 main.go:141] libmachine: (ha-104111-m02)     
	I0729 13:28:47.431527  992950 main.go:141] libmachine: (ha-104111-m02)     
	I0729 13:28:47.431537  992950 main.go:141] libmachine: (ha-104111-m02)   </devices>
	I0729 13:28:47.431546  992950 main.go:141] libmachine: (ha-104111-m02) </domain>
	I0729 13:28:47.431554  992950 main.go:141] libmachine: (ha-104111-m02) 
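The domain definition above is what libmachine hands to libvirt; done by hand against the same qemu:///system URI, the rough equivalent would be saving that XML and running "virsh define ha-104111-m02.xml" followed by "virsh start ha-104111-m02".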
	I0729 13:28:47.437835  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:64:b7:88 in network default
	I0729 13:28:47.438408  992950 main.go:141] libmachine: (ha-104111-m02) Ensuring networks are active...
	I0729 13:28:47.438428  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:47.439098  992950 main.go:141] libmachine: (ha-104111-m02) Ensuring network default is active
	I0729 13:28:47.439426  992950 main.go:141] libmachine: (ha-104111-m02) Ensuring network mk-ha-104111 is active
	I0729 13:28:47.439808  992950 main.go:141] libmachine: (ha-104111-m02) Getting domain xml...
	I0729 13:28:47.440645  992950 main.go:141] libmachine: (ha-104111-m02) Creating domain...
	I0729 13:28:47.750199  992950 main.go:141] libmachine: (ha-104111-m02) Waiting to get IP...
	I0729 13:28:47.751151  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:47.751536  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:47.751564  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.751502  993329 retry.go:31] will retry after 191.367372ms: waiting for machine to come up
	I0729 13:28:47.945071  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:47.945535  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:47.945568  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.945487  993329 retry.go:31] will retry after 272.868972ms: waiting for machine to come up
	I0729 13:28:48.220189  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:48.220776  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:48.220809  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:48.220718  993329 retry.go:31] will retry after 480.381516ms: waiting for machine to come up
	I0729 13:28:48.702452  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:48.702934  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:48.702963  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:48.702890  993329 retry.go:31] will retry after 576.409222ms: waiting for machine to come up
	I0729 13:28:49.281103  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:49.281583  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:49.281613  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:49.281510  993329 retry.go:31] will retry after 759.907393ms: waiting for machine to come up
	I0729 13:28:50.043627  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:50.044116  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:50.044147  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:50.044078  993329 retry.go:31] will retry after 919.552774ms: waiting for machine to come up
	I0729 13:28:50.965536  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:50.966009  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:50.966054  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:50.965947  993329 retry.go:31] will retry after 856.019302ms: waiting for machine to come up
	I0729 13:28:51.824292  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:51.824800  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:51.824833  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:51.824742  993329 retry.go:31] will retry after 1.346244961s: waiting for machine to come up
	I0729 13:28:53.172719  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:53.173148  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:53.173179  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:53.173086  993329 retry.go:31] will retry after 1.765358776s: waiting for machine to come up
	I0729 13:28:54.941248  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:54.941718  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:54.941744  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:54.941682  993329 retry.go:31] will retry after 1.601671877s: waiting for machine to come up
	I0729 13:28:56.545651  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:56.546123  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:56.546181  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:56.546061  993329 retry.go:31] will retry after 2.533098194s: waiting for machine to come up
	I0729 13:28:59.082270  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:59.082757  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:59.082790  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:59.082716  993329 retry.go:31] will retry after 2.913309526s: waiting for machine to come up
	I0729 13:29:01.999738  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:02.000103  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:29:02.000131  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:29:02.000056  993329 retry.go:31] will retry after 3.778820645s: waiting for machine to come up
	I0729 13:29:05.780608  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.780979  992950 main.go:141] libmachine: (ha-104111-m02) Found IP for machine: 192.168.39.140
	I0729 13:29:05.780999  992950 main.go:141] libmachine: (ha-104111-m02) Reserving static IP address...
	I0729 13:29:05.781010  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has current primary IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.781302  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find host DHCP lease matching {name: "ha-104111-m02", mac: "52:54:00:5b:c5:02", ip: "192.168.39.140"} in network mk-ha-104111
	I0729 13:29:05.854419  992950 main.go:141] libmachine: (ha-104111-m02) Reserved static IP address: 192.168.39.140
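The DHCP lease that the retry loop above was polling for can also be inspected directly with "virsh net-dhcp-leases mk-ha-104111", which lists the MAC-to-IP bindings on the private network (here 52:54:00:5b:c5:02 -> 192.168.39.140).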
	I0729 13:29:05.854454  992950 main.go:141] libmachine: (ha-104111-m02) Waiting for SSH to be available...
	I0729 13:29:05.854464  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Getting to WaitForSSH function...
	I0729 13:29:05.857521  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.857946  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:05.857978  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.858107  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Using SSH client type: external
	I0729 13:29:05.858127  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa (-rw-------)
	I0729 13:29:05.858171  992950 main.go:141] libmachine: (ha-104111-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:29:05.858183  992950 main.go:141] libmachine: (ha-104111-m02) DBG | About to run SSH command:
	I0729 13:29:05.858219  992950 main.go:141] libmachine: (ha-104111-m02) DBG | exit 0
	I0729 13:29:05.984234  992950 main.go:141] libmachine: (ha-104111-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 13:29:05.984527  992950 main.go:141] libmachine: (ha-104111-m02) KVM machine creation complete!
	I0729 13:29:05.984865  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetConfigRaw
	I0729 13:29:05.985414  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:05.985604  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:05.985738  992950 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:29:05.985755  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:29:05.986986  992950 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:29:05.987005  992950 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:29:05.987014  992950 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:29:05.987023  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:05.990681  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.991066  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:05.991086  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.991249  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:05.991425  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:05.991583  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:05.991688  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:05.991837  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:05.992107  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:05.992118  992950 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:29:06.100529  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:29:06.100561  992950 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:29:06.100578  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.103008  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.103401  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.103433  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.103611  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.103805  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.103949  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.104075  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.104226  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:06.104503  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:06.104520  992950 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:29:06.213429  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:29:06.213514  992950 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:29:06.213529  992950 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:29:06.213541  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetMachineName
	I0729 13:29:06.213823  992950 buildroot.go:166] provisioning hostname "ha-104111-m02"
	I0729 13:29:06.213847  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetMachineName
	I0729 13:29:06.214043  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.216778  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.217150  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.217174  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.217350  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.217542  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.217760  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.217906  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.218043  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:06.218265  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:06.218286  992950 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-104111-m02 && echo "ha-104111-m02" | sudo tee /etc/hostname
	I0729 13:29:06.340599  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111-m02
	
	I0729 13:29:06.340626  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.343542  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.344070  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.344101  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.344290  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.344550  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.344742  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.344881  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.345077  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:06.345298  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:06.345321  992950 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-104111-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-104111-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-104111-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:29:06.461425  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:29:06.461462  992950 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:29:06.461486  992950 buildroot.go:174] setting up certificates
	I0729 13:29:06.461499  992950 provision.go:84] configureAuth start
	I0729 13:29:06.461512  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetMachineName
	I0729 13:29:06.461879  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:29:06.465014  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.465418  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.465450  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.465662  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.467921  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.468248  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.468284  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.468430  992950 provision.go:143] copyHostCerts
	I0729 13:29:06.468465  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:29:06.468501  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 13:29:06.468510  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:29:06.468575  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:29:06.468663  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:29:06.468681  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 13:29:06.468687  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:29:06.468710  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:29:06.468803  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:29:06.468825  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 13:29:06.468829  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:29:06.468853  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:29:06.468905  992950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.ha-104111-m02 san=[127.0.0.1 192.168.39.140 ha-104111-m02 localhost minikube]
	I0729 13:29:06.553276  992950 provision.go:177] copyRemoteCerts
	I0729 13:29:06.553338  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:29:06.553366  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.555888  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.556162  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.556193  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.556369  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.556573  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.556758  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.556905  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:29:06.642853  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:29:06.642954  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:29:06.667139  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:29:06.667222  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:29:06.691905  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:29:06.691968  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 13:29:06.716659  992950 provision.go:87] duration metric: took 255.146179ms to configureAuth
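configureAuth above generates a server certificate whose SANs (listed at provision.go:117) cover the node IP, hostname, loopback and the "minikube" name, then copies it to /etc/docker/server.pem on the guest. A quick, illustrative way to double-check those SANs on the machine (not part of the test run) would be:

  # illustrative only: inspect the SANs of the server cert minikube provisioned
  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
  # expected to list 127.0.0.1, 192.168.39.140, ha-104111-m02, localhost, minikube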
	I0729 13:29:06.716685  992950 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:29:06.716850  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:29:06.716926  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.719548  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.719920  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.719947  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.720091  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.720306  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.720517  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.720679  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.720883  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:06.721105  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:06.721123  992950 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:29:06.993658  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
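The SSH command at 13:29:06.721 (the literal %!s(MISSING) is most likely a formatting quirk of the logger, not of the executed command) writes CRIO_MINIKUBE_OPTIONS into a sysconfig drop-in and restarts CRI-O so the service CIDR is treated as an insecure registry. A minimal sketch for confirming the result on the guest, assuming the paths shown in the log:

  cat /etc/sysconfig/crio.minikube    # should contain the --insecure-registry 10.96.0.0/12 option
  systemctl is-active crio            # "active" once the restart above succeeded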
	
	I0729 13:29:06.993689  992950 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:29:06.993697  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetURL
	I0729 13:29:06.995058  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Using libvirt version 6000000
	I0729 13:29:06.997040  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.997448  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.997476  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.997663  992950 main.go:141] libmachine: Docker is up and running!
	I0729 13:29:06.997677  992950 main.go:141] libmachine: Reticulating splines...
	I0729 13:29:06.997684  992950 client.go:171] duration metric: took 19.932661185s to LocalClient.Create
	I0729 13:29:06.997708  992950 start.go:167] duration metric: took 19.932717613s to libmachine.API.Create "ha-104111"
	I0729 13:29:06.997720  992950 start.go:293] postStartSetup for "ha-104111-m02" (driver="kvm2")
	I0729 13:29:06.997729  992950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:29:06.997755  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:06.998006  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:29:06.998031  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.999979  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.000356  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.000380  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.000539  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:07.000736  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.000892  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:07.001083  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:29:07.088492  992950 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:29:07.093011  992950 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:29:07.093043  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:29:07.093122  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:29:07.093220  992950 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 13:29:07.093234  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 13:29:07.093321  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:29:07.104263  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:29:07.130043  992950 start.go:296] duration metric: took 132.308837ms for postStartSetup
	I0729 13:29:07.130102  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetConfigRaw
	I0729 13:29:07.130836  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:29:07.133474  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.133858  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.133885  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.134118  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:29:07.134330  992950 start.go:128] duration metric: took 20.087167452s to createHost
	I0729 13:29:07.134356  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:07.136755  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.137085  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.137110  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.137261  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:07.137506  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.137677  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.137825  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:07.138015  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:07.138220  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:07.138232  992950 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:29:07.245283  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259747.216982635
	
	I0729 13:29:07.245313  992950 fix.go:216] guest clock: 1722259747.216982635
	I0729 13:29:07.245323  992950 fix.go:229] Guest: 2024-07-29 13:29:07.216982635 +0000 UTC Remote: 2024-07-29 13:29:07.134343214 +0000 UTC m=+76.380347522 (delta=82.639421ms)
	I0729 13:29:07.245346  992950 fix.go:200] guest clock delta is within tolerance: 82.639421ms
	I0729 13:29:07.245354  992950 start.go:83] releasing machines lock for "ha-104111-m02", held for 20.198273996s
	I0729 13:29:07.245378  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:07.245718  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:29:07.248734  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.249103  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.249128  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.251648  992950 out.go:177] * Found network options:
	I0729 13:29:07.253064  992950 out.go:177]   - NO_PROXY=192.168.39.120
	W0729 13:29:07.254398  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 13:29:07.254435  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:07.254959  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:07.255154  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:07.255272  992950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:29:07.255317  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	W0729 13:29:07.255345  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 13:29:07.255418  992950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:29:07.255435  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:07.257934  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.258162  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.258300  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.258327  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.258459  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:07.258526  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.258550  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.258644  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.258731  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:07.258803  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:07.258863  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.258922  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:29:07.258956  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:07.259119  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:29:07.491891  992950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:29:07.498637  992950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:29:07.498729  992950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:29:07.515737  992950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:29:07.515786  992950 start.go:495] detecting cgroup driver to use...
	I0729 13:29:07.515853  992950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:29:07.536462  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:29:07.550741  992950 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:29:07.550824  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:29:07.565215  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:29:07.578745  992950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:29:07.689384  992950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:29:07.830050  992950 docker.go:233] disabling docker service ...
	I0729 13:29:07.830141  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:29:07.844716  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:29:07.857689  992950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:29:07.986082  992950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:29:08.114810  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:29:08.128510  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:29:08.147463  992950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:29:08.147531  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.159873  992950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:29:08.159945  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.170990  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.181899  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.192355  992950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:29:08.203362  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.214072  992950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.230996  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
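The series of sed invocations above edits /etc/crio/crio.conf.d/02-crio.conf in place. Reconstructed from those commands (not copied from the node), the drop-in should end up roughly as follows; a one-line check on the guest:

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected (reconstructed from the sed commands above):
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",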
	I0729 13:29:08.241657  992950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:29:08.251180  992950 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:29:08.251241  992950 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:29:08.264803  992950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
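Because the sysctl probe at 13:29:08.241 fails (br_netfilter is not yet loaded on the fresh guest), the provisioner falls back to loading the module and enabling IP forwarding before restarting CRI-O. The equivalent standalone sequence, as a sketch:

  sudo modprobe br_netfilter
  sudo sysctl net.bridge.bridge-nf-call-iptables        # should now resolve instead of "No such file or directory"
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"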
	I0729 13:29:08.274843  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:29:08.393478  992950 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:29:08.527762  992950 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:29:08.527851  992950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:29:08.532685  992950 start.go:563] Will wait 60s for crictl version
	I0729 13:29:08.532744  992950 ssh_runner.go:195] Run: which crictl
	I0729 13:29:08.536759  992950 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:29:08.574605  992950 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:29:08.574705  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:29:08.602122  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:29:08.631758  992950 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:29:08.633116  992950 out.go:177]   - env NO_PROXY=192.168.39.120
	I0729 13:29:08.634529  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:29:08.637259  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:08.637577  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:08.637610  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:08.637821  992950 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:29:08.642064  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:29:08.654465  992950 mustload.go:65] Loading cluster: ha-104111
	I0729 13:29:08.654680  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:29:08.654950  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:29:08.654991  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:29:08.669860  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42007
	I0729 13:29:08.670308  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:29:08.670780  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:29:08.670812  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:29:08.671178  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:29:08.671394  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:29:08.672965  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:29:08.673256  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:29:08.673290  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:29:08.687865  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0729 13:29:08.688269  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:29:08.688766  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:29:08.688790  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:29:08.689117  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:29:08.689306  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:29:08.689469  992950 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111 for IP: 192.168.39.140
	I0729 13:29:08.689478  992950 certs.go:194] generating shared ca certs ...
	I0729 13:29:08.689498  992950 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:29:08.689650  992950 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:29:08.689701  992950 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:29:08.689714  992950 certs.go:256] generating profile certs ...
	I0729 13:29:08.689814  992950 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key
	I0729 13:29:08.689847  992950 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.3ed82b7a
	I0729 13:29:08.689867  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.3ed82b7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.120 192.168.39.140 192.168.39.254]
	I0729 13:29:08.893797  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.3ed82b7a ...
	I0729 13:29:08.893826  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.3ed82b7a: {Name:mk739a7714392be88871b57878d3f430f8a41e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:29:08.894019  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.3ed82b7a ...
	I0729 13:29:08.894038  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.3ed82b7a: {Name:mkc92c21f4500c5c2f144d8589021c12a3ab62a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:29:08.894140  992950 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.3ed82b7a -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt
	I0729 13:29:08.894313  992950 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.3ed82b7a -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key
	I0729 13:29:08.894497  992950 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key
	I0729 13:29:08.894516  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:29:08.894534  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:29:08.894558  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:29:08.894578  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:29:08.894594  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:29:08.894608  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:29:08.894625  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:29:08.894641  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:29:08.894707  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 13:29:08.894753  992950 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 13:29:08.894766  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:29:08.894799  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:29:08.894830  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:29:08.894857  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:29:08.894914  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:29:08.894955  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 13:29:08.894971  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 13:29:08.894988  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:29:08.895030  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:29:08.897907  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:29:08.898331  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:29:08.898354  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:29:08.898587  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:29:08.898811  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:29:08.899002  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:29:08.899153  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:29:08.976740  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 13:29:08.982083  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 13:29:08.993686  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 13:29:08.998010  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 13:29:09.008443  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 13:29:09.012522  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 13:29:09.023131  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 13:29:09.027483  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 13:29:09.038049  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 13:29:09.042195  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 13:29:09.052369  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 13:29:09.057855  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 13:29:09.069736  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:29:09.098073  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:29:09.125847  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:29:09.152988  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:29:09.177037  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 13:29:09.200149  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:29:09.223523  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:29:09.247715  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:29:09.273947  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 13:29:09.297427  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 13:29:09.321118  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:29:09.344732  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 13:29:09.361476  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 13:29:09.378190  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 13:29:09.395388  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 13:29:09.411825  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 13:29:09.428546  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 13:29:09.445345  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 13:29:09.461913  992950 ssh_runner.go:195] Run: openssl version
	I0729 13:29:09.468134  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 13:29:09.479322  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 13:29:09.483952  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 13:29:09.484007  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 13:29:09.490251  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:29:09.501522  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:29:09.512296  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:29:09.516983  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:29:09.517048  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:29:09.522743  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:29:09.533299  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 13:29:09.544513  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 13:29:09.549432  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 13:29:09.549491  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 13:29:09.555587  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
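Each of the three certificate blocks above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, then link it under /etc/ssl/certs as <hash>.0 so the system trust store can find it. A condensed sketch of that pattern (variable names are illustrative):

  pem=/usr/share/ca-certificates/minikubeCA.pem
  h=$(openssl x509 -hash -noout -in "$pem")    # b5213941 for minikubeCA.pem in this run
  sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"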
	I0729 13:29:09.566942  992950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:29:09.571255  992950 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:29:09.571309  992950 kubeadm.go:934] updating node {m02 192.168.39.140 8443 v1.30.3 crio true true} ...
	I0729 13:29:09.571404  992950 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-104111-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:29:09.571442  992950 kube-vip.go:115] generating kube-vip config ...
	I0729 13:29:09.571491  992950 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 13:29:09.590660  992950 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 13:29:09.590741  992950 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
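This generated manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp at 13:29:12.675), where the kubelet runs it as a static pod that holds the control-plane VIP 192.168.39.254 on eth0. An illustrative check on the node once the kubelet is up:

  sudo cat /etc/kubernetes/manifests/kube-vip.yaml    # the manifest shown above
  ip addr show dev eth0 | grep 192.168.39.254         # present on whichever control-plane node currently holds the VIP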
	I0729 13:29:09.590798  992950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:29:09.601992  992950 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 13:29:09.602060  992950 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 13:29:09.612289  992950 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 13:29:09.612333  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 13:29:09.612405  992950 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 13:29:09.612422  992950 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 13:29:09.612435  992950 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 13:29:09.617589  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 13:29:09.617614  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 13:29:10.236509  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 13:29:10.236608  992950 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 13:29:10.241897  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 13:29:10.241938  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 13:29:12.216934  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:29:12.231497  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 13:29:12.231600  992950 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 13:29:12.236175  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 13:29:12.236353  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 13:29:12.630248  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 13:29:12.640712  992950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 13:29:12.658075  992950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:29:12.675634  992950 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 13:29:12.692508  992950 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 13:29:12.696397  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:29:12.709569  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:29:12.830472  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
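At this point the kubelet drop-in (10-kubeadm.conf), the unit file and the kube-vip manifest are all in place and the kubelet has been started, but the node has not yet joined the cluster, so it may restart until the kubeadm join below supplies its config. A short sanity check on the guest, as a sketch:

  systemctl is-active kubelet                   # may flap until the join below completes
  systemctl cat kubelet | grep -- --node-ip     # confirms the --node-ip=192.168.39.140 override from the drop-in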
	I0729 13:29:12.847286  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:29:12.847749  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:29:12.847800  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:29:12.863991  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0729 13:29:12.864480  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:29:12.864993  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:29:12.865019  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:29:12.865320  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:29:12.865554  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:29:12.865718  992950 start.go:317] joinCluster: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:29:12.865854  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 13:29:12.865883  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:29:12.868629  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:29:12.869103  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:29:12.869132  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:29:12.869217  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:29:12.869380  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:29:12.869521  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:29:12.869684  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:29:13.025199  992950 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:29:13.025267  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g4sw7y.9eyyh608n7bqq2vd --discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-104111-m02 --control-plane --apiserver-advertise-address=192.168.39.140 --apiserver-bind-port=8443"
	I0729 13:29:35.403263  992950 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g4sw7y.9eyyh608n7bqq2vd --discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-104111-m02 --control-plane --apiserver-advertise-address=192.168.39.140 --apiserver-bind-port=8443": (22.377961362s)
	I0729 13:29:35.403309  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 13:29:35.891226  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-104111-m02 minikube.k8s.io/updated_at=2024_07_29T13_29_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=ha-104111 minikube.k8s.io/primary=false
	I0729 13:29:36.032621  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-104111-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 13:29:36.132525  992950 start.go:319] duration metric: took 23.266803925s to joinCluster
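	(For orientation: the control-plane join logged above boils down to the manual steps below. This is only an illustrative sketch of what the logged commands do, with placeholder token/hash values; it is not code from the test itself.)

	# on the primary control plane (ha-104111): mint a reusable join command
	sudo kubeadm token create --print-join-command --ttl=0

	# on the joining machine (ha-104111-m02): join as an additional control plane
	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --control-plane --apiserver-advertise-address=192.168.39.140 --apiserver-bind-port=8443 \
	  --cri-socket unix:///var/run/crio/crio.sock
	sudo systemctl daemon-reload && sudo systemctl enable --now kubelet

	# back on the primary: tag the node and allow workloads to schedule on it
	kubectl label --overwrite nodes ha-104111-m02 minikube.k8s.io/primary=false
	kubectl taint nodes ha-104111-m02 node-role.kubernetes.io/control-plane:NoSchedule-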
	I0729 13:29:36.132625  992950 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:29:36.132954  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:29:36.134359  992950 out.go:177] * Verifying Kubernetes components...
	I0729 13:29:36.135951  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:29:36.386814  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:29:36.440763  992950 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:29:36.441143  992950 kapi.go:59] client config for ha-104111: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt", KeyFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key", CAFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 13:29:36.441215  992950 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.120:8443
	I0729 13:29:36.441439  992950 node_ready.go:35] waiting up to 6m0s for node "ha-104111-m02" to be "Ready" ...
	I0729 13:29:36.441538  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:36.441548  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:36.441555  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:36.441558  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:36.453383  992950 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 13:29:36.942126  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:36.942148  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:36.942156  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:36.942160  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:36.946804  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:37.442473  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:37.442502  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:37.442518  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:37.442524  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:37.449273  992950 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 13:29:37.941680  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:37.941707  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:37.941718  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:37.941723  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:37.946410  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:38.441860  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:38.441886  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:38.441893  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:38.441896  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:38.445658  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:38.446546  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:38.941643  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:38.941667  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:38.941676  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:38.941680  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:38.945097  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:39.441830  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:39.441861  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:39.441873  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:39.441880  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:39.445086  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:39.942088  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:39.942113  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:39.942126  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:39.942131  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:39.945529  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:40.442556  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:40.442589  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:40.442601  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:40.442608  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:40.447373  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:40.448150  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:40.942483  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:40.942508  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:40.942527  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:40.942531  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:40.945952  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:41.441875  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:41.441896  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:41.441904  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:41.441908  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:41.447347  992950 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 13:29:41.942246  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:41.942269  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:41.942277  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:41.942282  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:41.945475  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:42.441733  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:42.441757  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:42.441766  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:42.441771  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:42.445901  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:42.942336  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:42.942361  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:42.942373  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:42.942380  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:42.947857  992950 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 13:29:42.948451  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:43.441690  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:43.441710  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:43.441719  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:43.441723  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:43.445013  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:43.942079  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:43.942105  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:43.942113  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:43.942117  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:43.945621  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:44.442542  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:44.442566  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:44.442575  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:44.442579  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:44.445568  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:44.941636  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:44.941659  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:44.941668  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:44.941672  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:44.944882  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:45.441782  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:45.441813  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:45.441830  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:45.441835  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:45.445508  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:45.446062  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:45.942294  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:45.942321  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:45.942328  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:45.942333  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:45.945563  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:46.441709  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:46.441732  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:46.441742  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:46.441748  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:46.444966  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:46.941674  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:46.941697  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:46.941705  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:46.941709  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:46.945643  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:47.441740  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:47.441766  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:47.441777  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:47.441788  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:47.444932  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:47.942210  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:47.942233  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:47.942242  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:47.942246  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:47.945147  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:47.945815  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:48.442060  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:48.442082  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:48.442091  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:48.442095  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:48.445112  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:48.941743  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:48.941770  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:48.941777  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:48.941781  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:48.945094  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:49.442394  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:49.442461  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:49.442484  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:49.442493  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:49.445335  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:49.942395  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:49.942421  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:49.942431  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:49.942436  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:49.945591  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:49.946166  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:50.442660  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:50.442688  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:50.442699  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:50.442711  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:50.446215  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:50.942195  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:50.942218  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:50.942227  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:50.942231  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:50.945931  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:51.442429  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:51.442450  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.442459  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.442463  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.445117  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.941985  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:51.942010  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.942019  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.942023  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.945363  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:51.946002  992950 node_ready.go:49] node "ha-104111-m02" has status "Ready":"True"
	I0729 13:29:51.946028  992950 node_ready.go:38] duration metric: took 15.504572235s for node "ha-104111-m02" to be "Ready" ...
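	(The request loop above polls GET /api/v1/nodes/ha-104111-m02 roughly every 500ms until the Ready condition reports True. A rough hand-run equivalent, assuming the kubeconfig path shown in the log and the same 6m budget, would be:)

	kubectl --kubeconfig=/home/jenkins/minikube-integration/19338-974764/kubeconfig \
	  wait --for=condition=Ready node/ha-104111-m02 --timeout=6m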
	I0729 13:29:51.946040  992950 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:29:51.946115  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:51.946126  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.946136  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.946141  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.950415  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:51.957410  992950 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.957508  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9jrnl
	I0729 13:29:51.957517  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.957536  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.957546  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.960363  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.961016  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:51.961032  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.961042  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.961049  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.963544  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.964082  992950 pod_ready.go:92] pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:51.964103  992950 pod_ready.go:81] duration metric: took 6.665172ms for pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.964113  992950 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.964182  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gcf7q
	I0729 13:29:51.964192  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.964201  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.964210  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.967943  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:51.968678  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:51.968696  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.968706  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.968711  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.977108  992950 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 13:29:51.977654  992950 pod_ready.go:92] pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:51.977675  992950 pod_ready.go:81] duration metric: took 13.554914ms for pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.977684  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.977741  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111
	I0729 13:29:51.977748  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.977755  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.977761  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.980192  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.980826  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:51.980845  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.980854  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.980860  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.983293  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.983882  992950 pod_ready.go:92] pod "etcd-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:51.983897  992950 pod_ready.go:81] duration metric: took 6.205001ms for pod "etcd-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.983907  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.983954  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111-m02
	I0729 13:29:51.983960  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.983967  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.983973  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.986286  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.986880  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:51.986893  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.986900  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.986903  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.989049  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.989486  992950 pod_ready.go:92] pod "etcd-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:51.989502  992950 pod_ready.go:81] duration metric: took 5.587819ms for pod "etcd-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.989515  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.143040  992950 request.go:629] Waited for 153.461561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111
	I0729 13:29:52.143122  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111
	I0729 13:29:52.143127  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.143134  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.143140  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.146310  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:52.342451  992950 request.go:629] Waited for 195.396971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:52.342513  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:52.342519  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.342526  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.342530  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.346220  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:52.347027  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:52.347048  992950 pod_ready.go:81] duration metric: took 357.523914ms for pod "kube-apiserver-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.347058  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.542203  992950 request.go:629] Waited for 195.05716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m02
	I0729 13:29:52.542272  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m02
	I0729 13:29:52.542278  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.542286  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.542291  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.545662  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:52.742837  992950 request.go:629] Waited for 196.367811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:52.742919  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:52.742924  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.742932  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.742937  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.746222  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:52.746921  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:52.746942  992950 pod_ready.go:81] duration metric: took 399.878396ms for pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.746956  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.942458  992950 request.go:629] Waited for 195.422762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111
	I0729 13:29:52.942525  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111
	I0729 13:29:52.942529  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.942537  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.942542  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.945928  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.142097  992950 request.go:629] Waited for 195.306368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:53.142171  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:53.142176  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.142183  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.142189  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.145312  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.146344  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:53.146368  992950 pod_ready.go:81] duration metric: took 399.402588ms for pod "kube-controller-manager-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.146382  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.342533  992950 request.go:629] Waited for 196.040014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m02
	I0729 13:29:53.342605  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m02
	I0729 13:29:53.342611  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.342619  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.342623  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.346259  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.542404  992950 request.go:629] Waited for 195.381486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:53.542489  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:53.542502  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.542515  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.542522  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.545914  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.546477  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:53.546500  992950 pod_ready.go:81] duration metric: took 400.109056ms for pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.546514  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5dnvv" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.742509  992950 request.go:629] Waited for 195.89347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dnvv
	I0729 13:29:53.742598  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dnvv
	I0729 13:29:53.742607  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.742619  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.742628  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.745882  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.942973  992950 request.go:629] Waited for 196.370167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:53.943055  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:53.943060  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.943067  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.943071  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.946517  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.947342  992950 pod_ready.go:92] pod "kube-proxy-5dnvv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:53.947361  992950 pod_ready.go:81] duration metric: took 400.840279ms for pod "kube-proxy-5dnvv" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.947370  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n6kkf" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.142565  992950 request.go:629] Waited for 195.109125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf
	I0729 13:29:54.142651  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf
	I0729 13:29:54.142655  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.142664  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.142668  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.145792  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:54.342884  992950 request.go:629] Waited for 196.379896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:54.342946  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:54.342951  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.342958  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.342963  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.346249  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:54.346731  992950 pod_ready.go:92] pod "kube-proxy-n6kkf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:54.346749  992950 pod_ready.go:81] duration metric: took 399.373512ms for pod "kube-proxy-n6kkf" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.346757  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.542900  992950 request.go:629] Waited for 196.035828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111
	I0729 13:29:54.542972  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111
	I0729 13:29:54.542981  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.542992  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.543002  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.546363  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:54.742656  992950 request.go:629] Waited for 195.385036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:54.742740  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:54.742747  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.742759  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.742765  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.746133  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:54.746617  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:54.746640  992950 pod_ready.go:81] duration metric: took 399.87386ms for pod "kube-scheduler-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.746651  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.942799  992950 request.go:629] Waited for 196.059566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m02
	I0729 13:29:54.942866  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m02
	I0729 13:29:54.942871  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.942880  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.942884  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.945978  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:55.142826  992950 request.go:629] Waited for 196.3718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:55.142906  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:55.142911  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.142919  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.142933  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.146281  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:55.146738  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:55.146757  992950 pod_ready.go:81] duration metric: took 400.09936ms for pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:55.146767  992950 pod_ready.go:38] duration metric: took 3.200713503s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
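	(The per-pod checks above wait on each system-critical pod selected by the labels listed in the log. A hedged shell equivalent of the same verification, reusing those label selectors, might be:)

	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
	done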
	I0729 13:29:55.146784  992950 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:29:55.146835  992950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:29:55.162380  992950 api_server.go:72] duration metric: took 19.029713385s to wait for apiserver process to appear ...
	I0729 13:29:55.162408  992950 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:29:55.162429  992950 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 13:29:55.167569  992950 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 13:29:55.167634  992950 round_trippers.go:463] GET https://192.168.39.120:8443/version
	I0729 13:29:55.167639  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.167646  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.167652  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.168457  992950 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 13:29:55.168565  992950 api_server.go:141] control plane version: v1.30.3
	I0729 13:29:55.168583  992950 api_server.go:131] duration metric: took 6.169505ms to wait for apiserver health ...
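	(The health check above probes /healthz and then reads /version from the API server. A rough manual equivalent, assuming the same kubeconfig, is sketched below; the direct curl relies on the anonymous access Kubernetes grants to /healthz and /version by default:)

	kubectl get --raw /healthz                      # expected body: ok
	kubectl version -o json                         # server gitVersion should report v1.30.3
	curl -sk https://192.168.39.120:8443/healthz    # hitting the node IP directly, as the log does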
	I0729 13:29:55.168591  992950 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:29:55.343027  992950 request.go:629] Waited for 174.355466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:55.343108  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:55.343114  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.343122  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.343126  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.348390  992950 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 13:29:55.352540  992950 system_pods.go:59] 17 kube-system pods found
	I0729 13:29:55.352581  992950 system_pods.go:61] "coredns-7db6d8ff4d-9jrnl" [0453ed97-efb4-41c1-8bfb-e7e004e618e0] Running
	I0729 13:29:55.352587  992950 system_pods.go:61] "coredns-7db6d8ff4d-gcf7q" [196981ba-ed16-427c-ae8b-9b7e8ff36be2] Running
	I0729 13:29:55.352592  992950 system_pods.go:61] "etcd-ha-104111" [309561db-8f30-4b42-8252-e02d9a26ec2e] Running
	I0729 13:29:55.352595  992950 system_pods.go:61] "etcd-ha-104111-m02" [4f09acca-1baa-4eba-8ef4-eb3e2b64512c] Running
	I0729 13:29:55.352598  992950 system_pods.go:61] "kindnet-9phpm" [60e9c45f-5176-492e-90c7-49b0201afe1e] Running
	I0729 13:29:55.352601  992950 system_pods.go:61] "kindnet-njndz" [a0477f9b-b1ff-49d8-8f39-21ffb84377e9] Running
	I0729 13:29:55.352605  992950 system_pods.go:61] "kube-apiserver-ha-104111" [d546ecd4-9bdb-4e41-9e4a-74d0c81359d5] Running
	I0729 13:29:55.352608  992950 system_pods.go:61] "kube-apiserver-ha-104111-m02" [70bd608c-3ebe-4306-8ec9-61c254ca5261] Running
	I0729 13:29:55.352611  992950 system_pods.go:61] "kube-controller-manager-ha-104111" [03be8232-ff90-43e1-87e0-5d61aeaa7c96] Running
	I0729 13:29:55.352615  992950 system_pods.go:61] "kube-controller-manager-ha-104111-m02" [d2ca4758-3c38-4655-8bfb-b5a64b0b6bca] Running
	I0729 13:29:55.352618  992950 system_pods.go:61] "kube-proxy-5dnvv" [2fb3553e-b114-4528-bf9a-1765356bb2a4] Running
	I0729 13:29:55.352623  992950 system_pods.go:61] "kube-proxy-n6kkf" [4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1] Running
	I0729 13:29:55.352627  992950 system_pods.go:61] "kube-scheduler-ha-104111" [3236068e-5891-4cb7-aa91-8aaf93260f3a] Running
	I0729 13:29:55.352630  992950 system_pods.go:61] "kube-scheduler-ha-104111-m02" [01a2d6c2-859d-44e8-9d53-0d257b4b4a1c] Running
	I0729 13:29:55.352633  992950 system_pods.go:61] "kube-vip-ha-104111" [edfeb506-2884-4406-92cf-c35fce56d7c4] Running
	I0729 13:29:55.352636  992950 system_pods.go:61] "kube-vip-ha-104111-m02" [bcc970d3-1717-4971-8216-7526fe2028ba] Running
	I0729 13:29:55.352639  992950 system_pods.go:61] "storage-provisioner" [b61cc52e-771b-484a-99d6-8963665cb1e8] Running
	I0729 13:29:55.352648  992950 system_pods.go:74] duration metric: took 184.048689ms to wait for pod list to return data ...
	I0729 13:29:55.352659  992950 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:29:55.542048  992950 request.go:629] Waited for 189.288949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/default/serviceaccounts
	I0729 13:29:55.542120  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/default/serviceaccounts
	I0729 13:29:55.542127  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.542137  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.542141  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.545898  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:55.546117  992950 default_sa.go:45] found service account: "default"
	I0729 13:29:55.546131  992950 default_sa.go:55] duration metric: took 193.466691ms for default service account to be created ...
	I0729 13:29:55.546140  992950 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:29:55.742596  992950 request.go:629] Waited for 196.370929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:55.742659  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:55.742664  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.742672  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.742676  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.747839  992950 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 13:29:55.753916  992950 system_pods.go:86] 17 kube-system pods found
	I0729 13:29:55.753944  992950 system_pods.go:89] "coredns-7db6d8ff4d-9jrnl" [0453ed97-efb4-41c1-8bfb-e7e004e618e0] Running
	I0729 13:29:55.753950  992950 system_pods.go:89] "coredns-7db6d8ff4d-gcf7q" [196981ba-ed16-427c-ae8b-9b7e8ff36be2] Running
	I0729 13:29:55.753954  992950 system_pods.go:89] "etcd-ha-104111" [309561db-8f30-4b42-8252-e02d9a26ec2e] Running
	I0729 13:29:55.753959  992950 system_pods.go:89] "etcd-ha-104111-m02" [4f09acca-1baa-4eba-8ef4-eb3e2b64512c] Running
	I0729 13:29:55.753962  992950 system_pods.go:89] "kindnet-9phpm" [60e9c45f-5176-492e-90c7-49b0201afe1e] Running
	I0729 13:29:55.753967  992950 system_pods.go:89] "kindnet-njndz" [a0477f9b-b1ff-49d8-8f39-21ffb84377e9] Running
	I0729 13:29:55.753970  992950 system_pods.go:89] "kube-apiserver-ha-104111" [d546ecd4-9bdb-4e41-9e4a-74d0c81359d5] Running
	I0729 13:29:55.753974  992950 system_pods.go:89] "kube-apiserver-ha-104111-m02" [70bd608c-3ebe-4306-8ec9-61c254ca5261] Running
	I0729 13:29:55.753979  992950 system_pods.go:89] "kube-controller-manager-ha-104111" [03be8232-ff90-43e1-87e0-5d61aeaa7c96] Running
	I0729 13:29:55.753983  992950 system_pods.go:89] "kube-controller-manager-ha-104111-m02" [d2ca4758-3c38-4655-8bfb-b5a64b0b6bca] Running
	I0729 13:29:55.753987  992950 system_pods.go:89] "kube-proxy-5dnvv" [2fb3553e-b114-4528-bf9a-1765356bb2a4] Running
	I0729 13:29:55.753990  992950 system_pods.go:89] "kube-proxy-n6kkf" [4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1] Running
	I0729 13:29:55.753994  992950 system_pods.go:89] "kube-scheduler-ha-104111" [3236068e-5891-4cb7-aa91-8aaf93260f3a] Running
	I0729 13:29:55.753998  992950 system_pods.go:89] "kube-scheduler-ha-104111-m02" [01a2d6c2-859d-44e8-9d53-0d257b4b4a1c] Running
	I0729 13:29:55.754003  992950 system_pods.go:89] "kube-vip-ha-104111" [edfeb506-2884-4406-92cf-c35fce56d7c4] Running
	I0729 13:29:55.754008  992950 system_pods.go:89] "kube-vip-ha-104111-m02" [bcc970d3-1717-4971-8216-7526fe2028ba] Running
	I0729 13:29:55.754013  992950 system_pods.go:89] "storage-provisioner" [b61cc52e-771b-484a-99d6-8963665cb1e8] Running
	I0729 13:29:55.754020  992950 system_pods.go:126] duration metric: took 207.873507ms to wait for k8s-apps to be running ...
	I0729 13:29:55.754031  992950 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:29:55.754077  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:29:55.771125  992950 system_svc.go:56] duration metric: took 17.080254ms WaitForService to wait for kubelet
	I0729 13:29:55.771170  992950 kubeadm.go:582] duration metric: took 19.638499805s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:29:55.771200  992950 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:29:55.942985  992950 request.go:629] Waited for 171.654425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes
	I0729 13:29:55.943057  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes
	I0729 13:29:55.943062  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.943071  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.943078  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.946554  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:55.947499  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:29:55.947526  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:29:55.947539  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:29:55.947608  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:29:55.947622  992950 node_conditions.go:105] duration metric: took 176.415483ms to run NodePressure ...
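	(The NodePressure step above simply reads each node's reported CPU and ephemeral-storage capacity. A hedged manual equivalent:)

	kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
	kubectl describe nodes | grep -E 'MemoryPressure|DiskPressure|PIDPressure'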
	I0729 13:29:55.947637  992950 start.go:241] waiting for startup goroutines ...
	I0729 13:29:55.947671  992950 start.go:255] writing updated cluster config ...
	I0729 13:29:55.949846  992950 out.go:177] 
	I0729 13:29:55.951228  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:29:55.951315  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
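	(With the m02 phase complete and the profile saved, the same provisioning and join sequence now repeats for a third control-plane node. Outside this test harness, the comparable user-facing operation would be roughly the command below; flag spelling is per minikube v1.33 HA support and may differ between versions:)

	minikube -p ha-104111 node add --control-plane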
	I0729 13:29:55.952728  992950 out.go:177] * Starting "ha-104111-m03" control-plane node in "ha-104111" cluster
	I0729 13:29:55.953806  992950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:29:55.953826  992950 cache.go:56] Caching tarball of preloaded images
	I0729 13:29:55.953933  992950 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:29:55.953945  992950 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:29:55.954027  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:29:55.954183  992950 start.go:360] acquireMachinesLock for ha-104111-m03: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:29:55.954225  992950 start.go:364] duration metric: took 22.497µs to acquireMachinesLock for "ha-104111-m03"
	I0729 13:29:55.954243  992950 start.go:93] Provisioning new machine with config: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:29:55.954333  992950 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 13:29:55.955676  992950 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:29:55.955757  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:29:55.955801  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:29:55.971742  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34321
	I0729 13:29:55.972185  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:29:55.972719  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:29:55.972745  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:29:55.973162  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:29:55.973377  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetMachineName
	I0729 13:29:55.973607  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:29:55.973773  992950 start.go:159] libmachine.API.Create for "ha-104111" (driver="kvm2")
	I0729 13:29:55.973804  992950 client.go:168] LocalClient.Create starting
	I0729 13:29:55.973839  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 13:29:55.973879  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:29:55.973897  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:29:55.973971  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 13:29:55.973997  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:29:55.974013  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:29:55.974038  992950 main.go:141] libmachine: Running pre-create checks...
	I0729 13:29:55.974050  992950 main.go:141] libmachine: (ha-104111-m03) Calling .PreCreateCheck
	I0729 13:29:55.974238  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetConfigRaw
	I0729 13:29:55.974660  992950 main.go:141] libmachine: Creating machine...
	I0729 13:29:55.974675  992950 main.go:141] libmachine: (ha-104111-m03) Calling .Create
	I0729 13:29:55.974838  992950 main.go:141] libmachine: (ha-104111-m03) Creating KVM machine...
	I0729 13:29:55.976239  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found existing default KVM network
	I0729 13:29:55.976310  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found existing private KVM network mk-ha-104111
	I0729 13:29:55.976508  992950 main.go:141] libmachine: (ha-104111-m03) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03 ...
	I0729 13:29:55.976540  992950 main.go:141] libmachine: (ha-104111-m03) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:29:55.976594  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:55.976481  993689 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:29:55.976699  992950 main.go:141] libmachine: (ha-104111-m03) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:29:56.253824  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:56.253690  993689 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa...
	I0729 13:29:56.448014  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:56.447897  993689 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/ha-104111-m03.rawdisk...
	I0729 13:29:56.448040  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Writing magic tar header
	I0729 13:29:56.448056  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Writing SSH key tar header
	I0729 13:29:56.448064  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:56.448041  993689 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03 ...
	I0729 13:29:56.448209  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03
	I0729 13:29:56.448234  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 13:29:56.448248  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03 (perms=drwx------)
	I0729 13:29:56.448269  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:29:56.448284  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 13:29:56.448300  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 13:29:56.448314  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:29:56.448328  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:29:56.448343  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:29:56.448355  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 13:29:56.448370  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:29:56.448381  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:29:56.448394  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home
	I0729 13:29:56.448405  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Skipping /home - not owner
	I0729 13:29:56.448432  992950 main.go:141] libmachine: (ha-104111-m03) Creating domain...
	I0729 13:29:56.449331  992950 main.go:141] libmachine: (ha-104111-m03) define libvirt domain using xml: 
	I0729 13:29:56.449353  992950 main.go:141] libmachine: (ha-104111-m03) <domain type='kvm'>
	I0729 13:29:56.449364  992950 main.go:141] libmachine: (ha-104111-m03)   <name>ha-104111-m03</name>
	I0729 13:29:56.449374  992950 main.go:141] libmachine: (ha-104111-m03)   <memory unit='MiB'>2200</memory>
	I0729 13:29:56.449380  992950 main.go:141] libmachine: (ha-104111-m03)   <vcpu>2</vcpu>
	I0729 13:29:56.449388  992950 main.go:141] libmachine: (ha-104111-m03)   <features>
	I0729 13:29:56.449415  992950 main.go:141] libmachine: (ha-104111-m03)     <acpi/>
	I0729 13:29:56.449436  992950 main.go:141] libmachine: (ha-104111-m03)     <apic/>
	I0729 13:29:56.449448  992950 main.go:141] libmachine: (ha-104111-m03)     <pae/>
	I0729 13:29:56.449456  992950 main.go:141] libmachine: (ha-104111-m03)     
	I0729 13:29:56.449461  992950 main.go:141] libmachine: (ha-104111-m03)   </features>
	I0729 13:29:56.449469  992950 main.go:141] libmachine: (ha-104111-m03)   <cpu mode='host-passthrough'>
	I0729 13:29:56.449476  992950 main.go:141] libmachine: (ha-104111-m03)   
	I0729 13:29:56.449486  992950 main.go:141] libmachine: (ha-104111-m03)   </cpu>
	I0729 13:29:56.449498  992950 main.go:141] libmachine: (ha-104111-m03)   <os>
	I0729 13:29:56.449512  992950 main.go:141] libmachine: (ha-104111-m03)     <type>hvm</type>
	I0729 13:29:56.449523  992950 main.go:141] libmachine: (ha-104111-m03)     <boot dev='cdrom'/>
	I0729 13:29:56.449533  992950 main.go:141] libmachine: (ha-104111-m03)     <boot dev='hd'/>
	I0729 13:29:56.449551  992950 main.go:141] libmachine: (ha-104111-m03)     <bootmenu enable='no'/>
	I0729 13:29:56.449559  992950 main.go:141] libmachine: (ha-104111-m03)   </os>
	I0729 13:29:56.449567  992950 main.go:141] libmachine: (ha-104111-m03)   <devices>
	I0729 13:29:56.449583  992950 main.go:141] libmachine: (ha-104111-m03)     <disk type='file' device='cdrom'>
	I0729 13:29:56.449600  992950 main.go:141] libmachine: (ha-104111-m03)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/boot2docker.iso'/>
	I0729 13:29:56.449611  992950 main.go:141] libmachine: (ha-104111-m03)       <target dev='hdc' bus='scsi'/>
	I0729 13:29:56.449622  992950 main.go:141] libmachine: (ha-104111-m03)       <readonly/>
	I0729 13:29:56.449632  992950 main.go:141] libmachine: (ha-104111-m03)     </disk>
	I0729 13:29:56.449648  992950 main.go:141] libmachine: (ha-104111-m03)     <disk type='file' device='disk'>
	I0729 13:29:56.449660  992950 main.go:141] libmachine: (ha-104111-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:29:56.449676  992950 main.go:141] libmachine: (ha-104111-m03)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/ha-104111-m03.rawdisk'/>
	I0729 13:29:56.449687  992950 main.go:141] libmachine: (ha-104111-m03)       <target dev='hda' bus='virtio'/>
	I0729 13:29:56.449707  992950 main.go:141] libmachine: (ha-104111-m03)     </disk>
	I0729 13:29:56.449721  992950 main.go:141] libmachine: (ha-104111-m03)     <interface type='network'>
	I0729 13:29:56.449729  992950 main.go:141] libmachine: (ha-104111-m03)       <source network='mk-ha-104111'/>
	I0729 13:29:56.449736  992950 main.go:141] libmachine: (ha-104111-m03)       <model type='virtio'/>
	I0729 13:29:56.449748  992950 main.go:141] libmachine: (ha-104111-m03)     </interface>
	I0729 13:29:56.449759  992950 main.go:141] libmachine: (ha-104111-m03)     <interface type='network'>
	I0729 13:29:56.449768  992950 main.go:141] libmachine: (ha-104111-m03)       <source network='default'/>
	I0729 13:29:56.449778  992950 main.go:141] libmachine: (ha-104111-m03)       <model type='virtio'/>
	I0729 13:29:56.449789  992950 main.go:141] libmachine: (ha-104111-m03)     </interface>
	I0729 13:29:56.449799  992950 main.go:141] libmachine: (ha-104111-m03)     <serial type='pty'>
	I0729 13:29:56.449808  992950 main.go:141] libmachine: (ha-104111-m03)       <target port='0'/>
	I0729 13:29:56.449814  992950 main.go:141] libmachine: (ha-104111-m03)     </serial>
	I0729 13:29:56.449822  992950 main.go:141] libmachine: (ha-104111-m03)     <console type='pty'>
	I0729 13:29:56.449833  992950 main.go:141] libmachine: (ha-104111-m03)       <target type='serial' port='0'/>
	I0729 13:29:56.449845  992950 main.go:141] libmachine: (ha-104111-m03)     </console>
	I0729 13:29:56.449859  992950 main.go:141] libmachine: (ha-104111-m03)     <rng model='virtio'>
	I0729 13:29:56.449873  992950 main.go:141] libmachine: (ha-104111-m03)       <backend model='random'>/dev/random</backend>
	I0729 13:29:56.449883  992950 main.go:141] libmachine: (ha-104111-m03)     </rng>
	I0729 13:29:56.449891  992950 main.go:141] libmachine: (ha-104111-m03)     
	I0729 13:29:56.449901  992950 main.go:141] libmachine: (ha-104111-m03)     
	I0729 13:29:56.449912  992950 main.go:141] libmachine: (ha-104111-m03)   </devices>
	I0729 13:29:56.449927  992950 main.go:141] libmachine: (ha-104111-m03) </domain>
	I0729 13:29:56.449939  992950 main.go:141] libmachine: (ha-104111-m03) 
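The <domain> element printed line by line above is the libvirt definition for the new m03 node. As a rough, illustrative sketch only (a toy subset of the fields, not minikube's actual kvm2 driver code), rendering such a definition from a Go text/template looks like this:

    package main

    import (
        "os"
        "text/template"
    )

    // domainConfig holds only the fields this toy template needs;
    // the real driver config carries many more (ISO path, second NIC, RNG, ...).
    type domainConfig struct {
        Name     string
        MemoryMB int
        VCPUs    int
        DiskPath string
        Network  string
    }

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        // Values mirror the logged machine; the DiskPath is a placeholder.
        cfg := domainConfig{
            Name:     "ha-104111-m03",
            MemoryMB: 2200,
            VCPUs:    2,
            DiskPath: "/path/to/ha-104111-m03.rawdisk",
            Network:  "mk-ha-104111",
        }
        // Render the XML to stdout; a real driver would hand it to libvirt to define the domain.
        tmpl := template.Must(template.New("domain").Parse(domainTmpl))
        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }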
	I0729 13:29:56.457215  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:8c:01:54 in network default
	I0729 13:29:56.457786  992950 main.go:141] libmachine: (ha-104111-m03) Ensuring networks are active...
	I0729 13:29:56.457811  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:56.458421  992950 main.go:141] libmachine: (ha-104111-m03) Ensuring network default is active
	I0729 13:29:56.458737  992950 main.go:141] libmachine: (ha-104111-m03) Ensuring network mk-ha-104111 is active
	I0729 13:29:56.459072  992950 main.go:141] libmachine: (ha-104111-m03) Getting domain xml...
	I0729 13:29:56.459756  992950 main.go:141] libmachine: (ha-104111-m03) Creating domain...
	I0729 13:29:57.501999  992950 main.go:141] libmachine: (ha-104111-m03) Waiting to get IP...
	I0729 13:29:57.502803  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:57.503209  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:57.503262  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:57.503180  993689 retry.go:31] will retry after 298.469962ms: waiting for machine to come up
	I0729 13:29:57.803846  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:57.804428  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:57.804459  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:57.804372  993689 retry.go:31] will retry after 381.821251ms: waiting for machine to come up
	I0729 13:29:58.187924  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:58.188495  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:58.188523  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:58.188455  993689 retry.go:31] will retry after 434.823731ms: waiting for machine to come up
	I0729 13:29:58.625115  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:58.625596  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:58.625626  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:58.625546  993689 retry.go:31] will retry after 407.070954ms: waiting for machine to come up
	I0729 13:29:59.033847  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:59.034305  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:59.034337  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:59.034245  993689 retry.go:31] will retry after 705.30597ms: waiting for machine to come up
	I0729 13:29:59.741197  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:59.741542  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:59.741569  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:59.741514  993689 retry.go:31] will retry after 735.075984ms: waiting for machine to come up
	I0729 13:30:00.478330  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:00.478782  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:00.478820  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:00.478706  993689 retry.go:31] will retry after 775.52294ms: waiting for machine to come up
	I0729 13:30:01.255703  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:01.256209  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:01.256236  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:01.256147  993689 retry.go:31] will retry after 1.484398935s: waiting for machine to come up
	I0729 13:30:02.742528  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:02.742969  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:02.742999  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:02.742900  993689 retry.go:31] will retry after 1.641905411s: waiting for machine to come up
	I0729 13:30:04.386251  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:04.386697  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:04.386726  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:04.386639  993689 retry.go:31] will retry after 2.116497134s: waiting for machine to come up
	I0729 13:30:06.505074  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:06.505599  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:06.505629  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:06.505541  993689 retry.go:31] will retry after 2.589119157s: waiting for machine to come up
	I0729 13:30:09.097703  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:09.098114  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:09.098141  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:09.098056  993689 retry.go:31] will retry after 2.52148825s: waiting for machine to come up
	I0729 13:30:11.621108  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:11.621529  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:11.621559  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:11.621473  993689 retry.go:31] will retry after 3.286341726s: waiting for machine to come up
	I0729 13:30:14.911901  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:14.912230  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:14.912253  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:14.912211  993689 retry.go:31] will retry after 5.551884704s: waiting for machine to come up
	I0729 13:30:20.469159  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.469676  992950 main.go:141] libmachine: (ha-104111-m03) Found IP for machine: 192.168.39.202
	I0729 13:30:20.469709  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has current primary IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
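The long run of "will retry after …: waiting for machine to come up" lines above is the driver polling the network's DHCP leases with growing delays until the new domain reports an IP. A minimal Go sketch of that poll-with-backoff shape, assuming a pluggable lookup function (the helper and its jitter factor are illustrative, not minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes,
    // sleeping a little longer (with jitter) after each failed attempt.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay += delay / 2 // grow the base delay ~1.5x per attempt
        }
    }

    func main() {
        start := time.Now()
        ip, err := waitForIP(func() (string, error) {
            // Stand-in for reading the libvirt DHCP leases; succeeds after ~3s here.
            if time.Since(start) < 3*time.Second {
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.39.202", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }

Growing, jittered delays keep the first few checks responsive while avoiding hammering libvirt once the boot is clearly going to take a while, which matches the roughly 300ms-to-5.5s spread seen in the log.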
	I0729 13:30:20.469718  992950 main.go:141] libmachine: (ha-104111-m03) Reserving static IP address...
	I0729 13:30:20.470068  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find host DHCP lease matching {name: "ha-104111-m03", mac: "52:54:00:4a:86:be", ip: "192.168.39.202"} in network mk-ha-104111
	I0729 13:30:20.544666  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Getting to WaitForSSH function...
	I0729 13:30:20.544699  992950 main.go:141] libmachine: (ha-104111-m03) Reserved static IP address: 192.168.39.202
	I0729 13:30:20.544713  992950 main.go:141] libmachine: (ha-104111-m03) Waiting for SSH to be available...
	I0729 13:30:20.547598  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.548127  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:20.548150  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.548322  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Using SSH client type: external
	I0729 13:30:20.548352  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa (-rw-------)
	I0729 13:30:20.548385  992950 main.go:141] libmachine: (ha-104111-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:30:20.548401  992950 main.go:141] libmachine: (ha-104111-m03) DBG | About to run SSH command:
	I0729 13:30:20.548427  992950 main.go:141] libmachine: (ha-104111-m03) DBG | exit 0
	I0729 13:30:20.676697  992950 main.go:141] libmachine: (ha-104111-m03) DBG | SSH cmd err, output: <nil>: 
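The WaitForSSH step above shells out to the system ssh binary with the logged options and runs `exit 0` until that succeeds. A stripped-down sketch of the same readiness probe (the key path and 2-second poll interval are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns nil once `ssh ... exit 0` succeeds against the guest.
    func sshReady(ip, keyPath string) error {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run()
    }

    func main() {
        for {
            if err := sshReady("192.168.39.202", "/path/to/id_rsa"); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }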
	I0729 13:30:20.677036  992950 main.go:141] libmachine: (ha-104111-m03) KVM machine creation complete!
	I0729 13:30:20.677379  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetConfigRaw
	I0729 13:30:20.677988  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:20.678208  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:20.678403  992950 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:30:20.678419  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:30:20.679836  992950 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:30:20.679856  992950 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:30:20.679867  992950 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:30:20.679876  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:20.681994  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.682351  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:20.682392  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.682491  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:20.682718  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.682875  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.683016  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:20.683200  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:20.683545  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:20.683563  992950 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:30:20.787942  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:30:20.787977  992950 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:30:20.787989  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:20.790932  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.791361  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:20.791388  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.791594  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:20.791816  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.792009  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.792192  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:20.792362  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:20.792557  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:20.792570  992950 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:30:20.896951  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:30:20.897050  992950 main.go:141] libmachine: found compatible host: buildroot
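Provisioner detection is just `cat /etc/os-release` over SSH followed by matching the ID field ("buildroot" above). A compact sketch of parsing that key=value output (the helper name is illustrative):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease extracts KEY=value pairs from /etc/os-release-style output,
    // stripping optional quotes around the value.
    func parseOSRelease(out string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            fields[k] = strings.Trim(v, `"`)
        }
        return fields
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        fields := parseOSRelease(out)
        fmt.Println(fields["ID"], fields["VERSION_ID"]) // buildroot 2023.02.9
    }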
	I0729 13:30:20.897066  992950 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:30:20.897077  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetMachineName
	I0729 13:30:20.897340  992950 buildroot.go:166] provisioning hostname "ha-104111-m03"
	I0729 13:30:20.897371  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetMachineName
	I0729 13:30:20.897578  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:20.899994  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.900430  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:20.900460  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.900628  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:20.900815  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.900978  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.901122  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:20.901292  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:20.901485  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:20.901498  992950 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-104111-m03 && echo "ha-104111-m03" | sudo tee /etc/hostname
	I0729 13:30:21.019248  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111-m03
	
	I0729 13:30:21.019280  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.022097  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.022588  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.022619  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.022795  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.022992  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.023171  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.023351  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.023553  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:21.023743  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:21.023757  992950 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-104111-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-104111-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-104111-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:30:21.139606  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:30:21.139657  992950 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:30:21.139686  992950 buildroot.go:174] setting up certificates
	I0729 13:30:21.139702  992950 provision.go:84] configureAuth start
	I0729 13:30:21.139721  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetMachineName
	I0729 13:30:21.140056  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:30:21.142856  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.143218  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.143249  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.143387  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.145592  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.145993  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.146028  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.146134  992950 provision.go:143] copyHostCerts
	I0729 13:30:21.146169  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:30:21.146215  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 13:30:21.146227  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:30:21.146309  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:30:21.146433  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:30:21.146460  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 13:30:21.146470  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:30:21.146506  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:30:21.146573  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:30:21.146596  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 13:30:21.146605  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:30:21.146639  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:30:21.146703  992950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.ha-104111-m03 san=[127.0.0.1 192.168.39.202 ha-104111-m03 localhost minikube]
	I0729 13:30:21.316822  992950 provision.go:177] copyRemoteCerts
	I0729 13:30:21.316901  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:30:21.316935  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.319677  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.320091  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.320125  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.320317  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.320533  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.320709  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.320816  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:30:21.403366  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:30:21.403436  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:30:21.428787  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:30:21.428855  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 13:30:21.453361  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:30:21.453454  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:30:21.477890  992950 provision.go:87] duration metric: took 338.17007ms to configureAuth
	I0729 13:30:21.477919  992950 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:30:21.478156  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:21.478254  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.480971  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.481358  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.481390  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.481577  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.481795  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.481996  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.482132  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.482312  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:21.482475  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:21.482489  992950 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:30:21.755980  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:30:21.756019  992950 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:30:21.756032  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetURL
	I0729 13:30:21.757432  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Using libvirt version 6000000
	I0729 13:30:21.759897  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.760258  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.760284  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.760514  992950 main.go:141] libmachine: Docker is up and running!
	I0729 13:30:21.760534  992950 main.go:141] libmachine: Reticulating splines...
	I0729 13:30:21.760544  992950 client.go:171] duration metric: took 25.786731153s to LocalClient.Create
	I0729 13:30:21.760574  992950 start.go:167] duration metric: took 25.786802086s to libmachine.API.Create "ha-104111"
	I0729 13:30:21.760588  992950 start.go:293] postStartSetup for "ha-104111-m03" (driver="kvm2")
	I0729 13:30:21.760601  992950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:30:21.760634  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:21.760948  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:30:21.760979  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.763320  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.763742  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.763769  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.764026  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.764246  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.764443  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.764596  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:30:21.847034  992950 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:30:21.851637  992950 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:30:21.851664  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:30:21.851727  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:30:21.851798  992950 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 13:30:21.851810  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 13:30:21.851888  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:30:21.861246  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:30:21.885557  992950 start.go:296] duration metric: took 124.943682ms for postStartSetup
	I0729 13:30:21.885625  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetConfigRaw
	I0729 13:30:21.886214  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:30:21.888965  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.889335  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.889362  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.889604  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:30:21.889793  992950 start.go:128] duration metric: took 25.935449767s to createHost
	I0729 13:30:21.889818  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.892440  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.892894  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.892924  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.893045  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.893277  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.893483  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.893715  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.893933  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:21.894188  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:21.894204  992950 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:30:22.000945  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259821.979624969
	
	I0729 13:30:22.000977  992950 fix.go:216] guest clock: 1722259821.979624969
	I0729 13:30:22.000988  992950 fix.go:229] Guest: 2024-07-29 13:30:21.979624969 +0000 UTC Remote: 2024-07-29 13:30:21.889805218 +0000 UTC m=+151.135809526 (delta=89.819751ms)
	I0729 13:30:22.001008  992950 fix.go:200] guest clock delta is within tolerance: 89.819751ms
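The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host-side timestamp, and skip resyncing when the delta stays inside a tolerance. A small sketch of that comparison using the logged values (the one-second tolerance constant here is illustrative, not minikube's setting):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1722259821.979624969" (seconds.nanoseconds, as printed
    // by `date +%s.%N`, which always emits nine nanosecond digits) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1722259821.979624969")
        if err != nil {
            panic(err)
        }
        // Host-side reference time taken from the logged "Remote:" value.
        remote := time.Date(2024, 7, 29, 13, 30, 21, 889805218, time.UTC)
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // illustrative threshold
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
    }

Tolerating a small delta (89.8ms here) avoids stepping the guest clock on every start, which would otherwise disturb certificate validity checks and timestamps for no real benefit.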
	I0729 13:30:22.001016  992950 start.go:83] releasing machines lock for "ha-104111-m03", held for 26.046780651s
	I0729 13:30:22.001044  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:22.001303  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:30:22.003814  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.004227  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:22.004256  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.006586  992950 out.go:177] * Found network options:
	I0729 13:30:22.008030  992950 out.go:177]   - NO_PROXY=192.168.39.120,192.168.39.140
	W0729 13:30:22.009191  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 13:30:22.009215  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 13:30:22.009229  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:22.009810  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:22.010000  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:22.010103  992950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:30:22.010144  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	W0729 13:30:22.010214  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 13:30:22.010241  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 13:30:22.010304  992950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:30:22.010327  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:22.013130  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.013155  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.013541  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:22.013573  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.013601  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:22.013617  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.013666  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:22.013857  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:22.013900  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:22.014053  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:22.014053  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:22.014235  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:30:22.014254  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:22.014400  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:30:22.255424  992950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:30:22.261881  992950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:30:22.261952  992950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:30:22.278604  992950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:30:22.278629  992950 start.go:495] detecting cgroup driver to use...
	I0729 13:30:22.278697  992950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:30:22.295977  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:30:22.309347  992950 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:30:22.309397  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:30:22.323713  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:30:22.337031  992950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:30:22.473574  992950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:30:22.617495  992950 docker.go:233] disabling docker service ...
	I0729 13:30:22.617591  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:30:22.633090  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:30:22.646863  992950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:30:22.789995  992950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:30:22.922701  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:30:22.937901  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:30:22.956150  992950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:30:22.956231  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:22.966595  992950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:30:22.966668  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:22.977248  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:22.987581  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:22.997738  992950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:30:23.008451  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:23.018602  992950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:23.036643  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:23.047392  992950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:30:23.056283  992950 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:30:23.056336  992950 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:30:23.069826  992950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
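When the bridge-nf-call-iptables sysctl is missing, the run above falls back to loading br_netfilter and then enables IPv4 forwarding. The sketch below shows that fallback, assuming it runs as root directly on the node (the real code issues these commands through the SSH runner); ensureNetfilterForwarding is a hypothetical helper.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureNetfilterForwarding mirrors the fallback in the log: if the bridge
    // netfilter sysctl file is absent, load br_netfilter, then enable IPv4 forwarding.
    func ensureNetfilterForwarding() error {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// Equivalent of "sudo modprobe br_netfilter".
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
    	if err := ensureNetfilterForwarding(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }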
	I0729 13:30:23.079322  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:30:23.205697  992950 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:30:23.347229  992950 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:30:23.347318  992950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:30:23.352355  992950 start.go:563] Will wait 60s for crictl version
	I0729 13:30:23.352426  992950 ssh_runner.go:195] Run: which crictl
	I0729 13:30:23.356428  992950 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:30:23.398875  992950 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:30:23.398966  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:30:23.426519  992950 ssh_runner.go:195] Run: crio --version
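The configuration pass above pins the pause image and forces the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed before restarting crio. Here is a rough Go equivalent of those two substitutions, assuming the drop-in file already contains pause_image and cgroup_manager lines; applyCrioOverrides is a hypothetical helper, not minikube's own implementation.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // applyCrioOverrides mirrors the two sed edits in the log: replace any existing
    // pause_image and cgroup_manager lines with the requested values.
    func applyCrioOverrides(conf, pauseImage, cgroupManager string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	return conf
    }

    func main() {
    	in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(applyCrioOverrides(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }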
	I0729 13:30:23.458962  992950 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:30:23.460276  992950 out.go:177]   - env NO_PROXY=192.168.39.120
	I0729 13:30:23.461523  992950 out.go:177]   - env NO_PROXY=192.168.39.120,192.168.39.140
	I0729 13:30:23.462580  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:30:23.465387  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:23.465731  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:23.465762  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:23.465993  992950 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:30:23.470979  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:30:23.484119  992950 mustload.go:65] Loading cluster: ha-104111
	I0729 13:30:23.484371  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:23.484678  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:30:23.484719  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:23.499671  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0729 13:30:23.500085  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:23.500633  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:30:23.500658  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:23.501038  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:23.501245  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:30:23.502886  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:30:23.503180  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:30:23.503222  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:23.518531  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0729 13:30:23.518906  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:23.519357  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:30:23.519378  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:23.519682  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:23.519889  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:30:23.520051  992950 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111 for IP: 192.168.39.202
	I0729 13:30:23.520064  992950 certs.go:194] generating shared ca certs ...
	I0729 13:30:23.520083  992950 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:30:23.520218  992950 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:30:23.520254  992950 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:30:23.520264  992950 certs.go:256] generating profile certs ...
	I0729 13:30:23.520333  992950 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key
	I0729 13:30:23.520359  992950 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.8a8c6f18
	I0729 13:30:23.520375  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.8a8c6f18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.120 192.168.39.140 192.168.39.202 192.168.39.254]
	I0729 13:30:23.883932  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.8a8c6f18 ...
	I0729 13:30:23.883966  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.8a8c6f18: {Name:mk0835a088a954b28031c8441d71a4cb8d6f5a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:30:23.884140  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.8a8c6f18 ...
	I0729 13:30:23.884154  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.8a8c6f18: {Name:mk22c6a1527399308bc4fbf7c2a49423798bba4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:30:23.884227  992950 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.8a8c6f18 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt
	I0729 13:30:23.884349  992950 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.8a8c6f18 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key
	I0729 13:30:23.884493  992950 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key
	I0729 13:30:23.884516  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:30:23.884531  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:30:23.884543  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:30:23.884556  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:30:23.884568  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:30:23.884579  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:30:23.884590  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:30:23.884602  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:30:23.884651  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 13:30:23.884678  992950 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 13:30:23.884688  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:30:23.884710  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:30:23.884730  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:30:23.884750  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:30:23.884785  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:30:23.884809  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 13:30:23.884823  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 13:30:23.884835  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:30:23.884877  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:30:23.888048  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:30:23.888505  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:30:23.888535  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:30:23.888694  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:30:23.888939  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:30:23.889112  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:30:23.889264  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:30:23.964811  992950 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 13:30:23.971040  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 13:30:23.982962  992950 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 13:30:23.987609  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 13:30:23.998540  992950 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 13:30:24.002835  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 13:30:24.013962  992950 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 13:30:24.018301  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 13:30:24.029024  992950 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 13:30:24.033300  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 13:30:24.045031  992950 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 13:30:24.050048  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 13:30:24.061058  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:30:24.085701  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:30:24.108988  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:30:24.132493  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:30:24.156981  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 13:30:24.181050  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:30:24.206593  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:30:24.230412  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:30:24.253970  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 13:30:24.277190  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 13:30:24.299653  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:30:24.321868  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 13:30:24.337623  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 13:30:24.353693  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 13:30:24.369196  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 13:30:24.384909  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 13:30:24.401470  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 13:30:24.419437  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 13:30:24.436902  992950 ssh_runner.go:195] Run: openssl version
	I0729 13:30:24.442870  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 13:30:24.453688  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 13:30:24.458054  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 13:30:24.458106  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 13:30:24.463889  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 13:30:24.475761  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 13:30:24.486831  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 13:30:24.491240  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 13:30:24.491302  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 13:30:24.497104  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:30:24.507186  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:30:24.518206  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:30:24.522594  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:30:24.522645  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:30:24.528455  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
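The loop above installs each CA under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so system TLS lookups can find it. Below is a minimal sketch of that link step, assuming the openssl binary is on PATH; linkCAByHash is a hypothetical helper.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCAByHash asks openssl for the certificate's subject hash and exposes the
    // cert as <sslCertsDir>/<hash>.0, like the "ln -fs" commands in the log.
    func linkCAByHash(certPath, sslCertsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(sslCertsDir, hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: drop any stale link first
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", link)
    }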
	I0729 13:30:24.538684  992950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:30:24.542773  992950 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:30:24.542833  992950 kubeadm.go:934] updating node {m03 192.168.39.202 8443 v1.30.3 crio true true} ...
	I0729 13:30:24.542929  992950 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-104111-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:30:24.542964  992950 kube-vip.go:115] generating kube-vip config ...
	I0729 13:30:24.543000  992950 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 13:30:24.558593  992950 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 13:30:24.558671  992950 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 13:30:24.558743  992950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:30:24.569427  992950 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 13:30:24.569489  992950 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 13:30:24.578874  992950 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 13:30:24.578906  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 13:30:24.578962  992950 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 13:30:24.578880  992950 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 13:30:24.578881  992950 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 13:30:24.579022  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 13:30:24.579041  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:30:24.579092  992950 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 13:30:24.583233  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 13:30:24.583260  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 13:30:24.619828  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 13:30:24.619841  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 13:30:24.619870  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 13:30:24.619953  992950 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 13:30:24.662888  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 13:30:24.662938  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
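The binaries are missing on the new node, so kubeadm, kubectl and kubelet are copied from the local cache; when the cache is cold they are fetched from dl.k8s.io with a checksum query that points at the published .sha256 file, as the URLs above show. The short sketch below just reproduces that URL construction; binaryURL is a hypothetical helper.

    package main

    import "fmt"

    // binaryURL builds the dl.k8s.io download URL seen in the log, including the
    // ?checksum=file: query naming the sha256 sidecar used for verification.
    func binaryURL(version, osName, arch, component string) string {
    	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/%s", version, osName, arch, component)
    	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
    }

    func main() {
    	for _, c := range []string{"kubeadm", "kubectl", "kubelet"} {
    		fmt.Println(binaryURL("v1.30.3", "linux", "amd64", c))
    	}
    }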
	I0729 13:30:25.467099  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 13:30:25.476637  992950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 13:30:25.493254  992950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:30:25.509447  992950 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 13:30:25.526326  992950 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 13:30:25.530232  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
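The /etc/hosts update above is the usual grep -v / echo rewrite: drop any stale control-plane.minikube.internal entry, then append the VIP. A local Go sketch of the same rewrite, assuming write access to the hosts file; upsertHost is a hypothetical helper.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any existing line for host and appends "ip\thost",
    // mirroring the grep -v / echo pipeline shown in the log.
    func upsertHost(hostsFile, ip, host string) error {
    	data, err := os.ReadFile(hostsFile)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }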
	I0729 13:30:25.542198  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:30:25.680050  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:30:25.696990  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:30:25.697435  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:30:25.697491  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:25.712979  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
	I0729 13:30:25.713467  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:25.713992  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:30:25.714023  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:25.714399  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:25.714669  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:30:25.714844  992950 start.go:317] joinCluster: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:30:25.714980  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 13:30:25.715004  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:30:25.717843  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:30:25.718353  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:30:25.718386  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:30:25.718564  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:30:25.718777  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:30:25.718943  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:30:25.719120  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:30:25.891140  992950 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:30:25.891207  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jsru0n.6tmnapp7fvkxu10o --discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-104111-m03 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443"
	I0729 13:30:50.044737  992950 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jsru0n.6tmnapp7fvkxu10o --discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-104111-m03 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443": (24.153495806s)
	I0729 13:30:50.044780  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 13:30:50.665990  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-104111-m03 minikube.k8s.io/updated_at=2024_07_29T13_30_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=ha-104111 minikube.k8s.io/primary=false
	I0729 13:30:50.788873  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-104111-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 13:30:50.906265  992950 start.go:319] duration metric: took 25.191408416s to joinCluster
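joinCluster first runs `kubeadm token create --print-join-command --ttl=0` on the primary, then replays that command on the new machine with the control-plane flags visible above. The sketch below only shows how those extra flags are appended to the printed join command; controlPlaneJoinCmd is a hypothetical helper and the token/hash are placeholders.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // controlPlaneJoinCmd appends the control-plane specific flags seen in the log
    // to the base command printed by "kubeadm token create --print-join-command".
    func controlPlaneJoinCmd(baseJoin, nodeName, advertiseAddr string, bindPort int) string {
    	extra := []string{
    		"--ignore-preflight-errors=all",
    		"--cri-socket unix:///var/run/crio/crio.sock",
    		"--node-name=" + nodeName,
    		"--control-plane",
    		"--apiserver-advertise-address=" + advertiseAddr,
    		fmt.Sprintf("--apiserver-bind-port=%d", bindPort),
    	}
    	return strings.TrimSpace(baseJoin) + " " + strings.Join(extra, " ")
    }

    func main() {
    	base := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
    	fmt.Println(controlPlaneJoinCmd(base, "ha-104111-m03", "192.168.39.202", 8443))
    }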
	I0729 13:30:50.906361  992950 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:30:50.906679  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:50.907917  992950 out.go:177] * Verifying Kubernetes components...
	I0729 13:30:50.909274  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:30:51.264205  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:30:51.290785  992950 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:30:51.291111  992950 kapi.go:59] client config for ha-104111: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt", KeyFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key", CAFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 13:30:51.291177  992950 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.120:8443
	I0729 13:30:51.291405  992950 node_ready.go:35] waiting up to 6m0s for node "ha-104111-m03" to be "Ready" ...
	I0729 13:30:51.291509  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:51.291519  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:51.291529  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:51.291540  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:51.294765  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:51.792443  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:51.792468  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:51.792478  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:51.792483  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:51.795927  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:52.292584  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:52.292605  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:52.292614  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:52.292618  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:52.295905  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:52.791812  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:52.791834  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:52.791841  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:52.791844  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:52.795407  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:53.292160  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:53.292182  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:53.292190  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:53.292196  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:53.295098  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:30:53.295609  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:30:53.791680  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:53.791704  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:53.791714  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:53.791718  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:53.795013  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:54.292006  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:54.292032  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:54.292045  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:54.292052  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:54.297003  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:30:54.792212  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:54.792235  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:54.792244  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:54.792251  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:54.795284  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:55.292380  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:55.292430  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:55.292444  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:55.292451  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:55.295654  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:55.296123  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:30:55.791592  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:55.791617  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:55.791628  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:55.791633  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:55.795193  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:56.292358  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:56.292390  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:56.292402  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:56.292428  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:56.296131  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:56.792032  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:56.792064  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:56.792074  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:56.792078  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:56.795178  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:57.292556  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:57.292579  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:57.292587  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:57.292590  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:57.295538  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:30:57.792190  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:57.792214  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:57.792222  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:57.792226  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:57.795481  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:57.796329  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:30:58.292138  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:58.292164  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:58.292173  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:58.292179  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:58.295374  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:58.792074  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:58.792100  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:58.792123  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:58.792141  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:58.795447  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:59.291598  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:59.291622  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:59.291631  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:59.291641  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:59.295477  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:59.792580  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:59.792606  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:59.792615  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:59.792619  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:59.796075  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:59.796850  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:31:00.291797  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:00.291823  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:00.291834  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:00.291839  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:00.295347  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:00.792261  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:00.792288  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:00.792300  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:00.792306  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:00.796719  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:31:01.291849  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:01.291873  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:01.291882  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:01.291886  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:01.295201  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:01.792043  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:01.792073  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:01.792086  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:01.792092  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:01.795257  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:02.291801  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:02.291828  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:02.291839  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:02.291848  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:02.295433  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:02.295900  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:31:02.792357  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:02.792381  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:02.792390  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:02.792395  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:02.796290  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:03.292642  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:03.292668  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:03.292680  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:03.292686  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:03.296168  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:03.791911  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:03.791935  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:03.791942  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:03.791947  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:03.795429  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:04.292377  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:04.292403  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:04.292425  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:04.292431  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:04.295741  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:04.296247  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:31:04.791591  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:04.791614  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:04.791623  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:04.791631  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:04.794960  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:05.292230  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:05.292259  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:05.292270  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:05.292278  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:05.295806  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:05.791592  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:05.791613  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:05.791621  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:05.791627  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:05.794832  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:06.292260  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:06.292283  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.292291  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.292296  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.295902  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:06.296633  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:31:06.791840  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:06.791864  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.791873  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.791877  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.795015  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:06.795654  992950 node_ready.go:49] node "ha-104111-m03" has status "Ready":"True"
	I0729 13:31:06.795672  992950 node_ready.go:38] duration metric: took 15.504251916s for node "ha-104111-m03" to be "Ready" ...
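The wait above simply re-reads the Node object every ~500ms until its Ready condition reports True. Below is a bare-bones sketch of that poll using net/http and the public Node JSON shape, assuming an http.Client already configured with the cluster's client certificates (the real code goes through client-go's round trippers, as the GET lines show); waitNodeReady is a hypothetical helper.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    type nodeStatus struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    // waitNodeReady polls the node object until its Ready condition is "True" or
    // the timeout expires, roughly matching the ~500ms cadence in the log.
    func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	url := fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			var ns nodeStatus
    			if json.NewDecoder(resp.Body).Decode(&ns) == nil {
    				for _, c := range ns.Status.Conditions {
    					if c.Type == "Ready" && c.Status == "True" {
    						resp.Body.Close()
    						return nil
    					}
    				}
    			}
    			resp.Body.Close()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s not Ready within %s", node, timeout)
    }

    func main() {
    	// Assumes a client already carrying the cluster's TLS configuration.
    	err := waitNodeReady(http.DefaultClient, "https://192.168.39.120:8443", "ha-104111-m03", 6*time.Minute)
    	fmt.Println(err)
    }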
	I0729 13:31:06.795681  992950 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:31:06.795746  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:06.795755  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.795762  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.795769  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.803015  992950 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 13:31:06.809007  992950 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.809109  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9jrnl
	I0729 13:31:06.809121  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.809131  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.809138  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.812118  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.812829  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:06.812849  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.812859  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.812864  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.815318  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.815905  992950 pod_ready.go:92] pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:06.815926  992950 pod_ready.go:81] duration metric: took 6.8965ms for pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.815935  992950 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.815984  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gcf7q
	I0729 13:31:06.815991  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.815998  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.816001  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.818782  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.819606  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:06.819624  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.819634  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.819640  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.822497  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.823059  992950 pod_ready.go:92] pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:06.823077  992950 pod_ready.go:81] duration metric: took 7.13506ms for pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.823091  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.823146  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111
	I0729 13:31:06.823156  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.823166  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.823171  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.825912  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.826341  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:06.826357  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.826367  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.826374  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.828314  992950 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 13:31:06.828788  992950 pod_ready.go:92] pod "etcd-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:06.828811  992950 pod_ready.go:81] duration metric: took 5.712779ms for pod "etcd-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.828822  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.828881  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111-m02
	I0729 13:31:06.828891  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.828901  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.828908  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.830972  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.831472  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:06.831488  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.831499  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.831507  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.834527  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:06.835237  992950 pod_ready.go:92] pod "etcd-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:06.835254  992950 pod_ready.go:81] duration metric: took 6.425388ms for pod "etcd-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.835265  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.992701  992950 request.go:629] Waited for 157.355132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111-m03
	I0729 13:31:06.992772  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111-m03
	I0729 13:31:06.992781  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.992789  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.992797  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.995815  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:07.192562  992950 request.go:629] Waited for 196.036575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:07.192650  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:07.192659  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.192667  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.192676  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.196062  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.196786  992950 pod_ready.go:92] pod "etcd-ha-104111-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:07.196819  992950 pod_ready.go:81] duration metric: took 361.540851ms for pod "etcd-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
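
The "Waited ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's own client-side rate limiter (by default roughly 5 requests per second with a small burst), not by the apiserver. A minimal sketch of loosening that limit on a rest.Config before building the clientset (the 50/100 values are illustrative, not what minikube uses):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Loosen the client-side rate limit so bursts of GETs are not delayed.
        // Illustrative values only.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("clientset ready: %T\n", cs)
    }
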
	I0729 13:31:07.196843  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.392052  992950 request.go:629] Waited for 195.103878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111
	I0729 13:31:07.392114  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111
	I0729 13:31:07.392119  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.392126  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.392130  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.395341  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.592486  992950 request.go:629] Waited for 196.373004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:07.592563  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:07.592570  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.592580  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.592592  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.595625  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.596166  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:07.596185  992950 pod_ready.go:81] duration metric: took 399.330679ms for pod "kube-apiserver-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.596194  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.792271  992950 request.go:629] Waited for 195.988198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m02
	I0729 13:31:07.792361  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m02
	I0729 13:31:07.792368  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.792378  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.792384  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.796068  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.992076  992950 request.go:629] Waited for 195.300254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:07.992166  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:07.992176  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.992184  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.992192  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.995318  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.996012  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:07.996032  992950 pod_ready.go:81] duration metric: took 399.831511ms for pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.996047  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.192110  992950 request.go:629] Waited for 195.9662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m03
	I0729 13:31:08.192197  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m03
	I0729 13:31:08.192202  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.192209  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.192214  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.195456  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:08.392690  992950 request.go:629] Waited for 196.415017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:08.392765  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:08.392770  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.392780  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.392786  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.396033  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:08.396951  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:08.396973  992950 pod_ready.go:81] duration metric: took 400.915579ms for pod "kube-apiserver-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.396986  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.591942  992950 request.go:629] Waited for 194.865072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111
	I0729 13:31:08.592027  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111
	I0729 13:31:08.592034  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.592056  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.592080  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.595146  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:08.792663  992950 request.go:629] Waited for 196.754804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:08.792779  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:08.792790  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.792803  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.792810  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.795970  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:08.796752  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:08.796775  992950 pod_ready.go:81] duration metric: took 399.78008ms for pod "kube-controller-manager-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.796789  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.992803  992950 request.go:629] Waited for 195.916125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m02
	I0729 13:31:08.992866  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m02
	I0729 13:31:08.992872  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.992880  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.992885  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.996024  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.191925  992950 request.go:629] Waited for 195.149373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:09.192005  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:09.192013  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.192023  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.192030  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.196960  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:31:09.197567  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:09.197595  992950 pod_ready.go:81] duration metric: took 400.798938ms for pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.197608  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.392759  992950 request.go:629] Waited for 195.045643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m03
	I0729 13:31:09.392822  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m03
	I0729 13:31:09.392827  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.392835  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.392839  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.396330  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.592501  992950 request.go:629] Waited for 195.398969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:09.592574  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:09.592579  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.592593  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.592600  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.596474  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.597178  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:09.597207  992950 pod_ready.go:81] duration metric: took 399.586044ms for pod "kube-controller-manager-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.597224  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5dnvv" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.792267  992950 request.go:629] Waited for 194.936789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dnvv
	I0729 13:31:09.792386  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dnvv
	I0729 13:31:09.792397  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.792425  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.792441  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.795782  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.991991  992950 request.go:629] Waited for 195.308276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:09.992053  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:09.992057  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.992065  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.992069  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.995661  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.996598  992950 pod_ready.go:92] pod "kube-proxy-5dnvv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:09.996620  992950 pod_ready.go:81] duration metric: took 399.386649ms for pod "kube-proxy-5dnvv" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.996633  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m765x" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.192489  992950 request.go:629] Waited for 195.78034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m765x
	I0729 13:31:10.192550  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m765x
	I0729 13:31:10.192556  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.192564  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.192570  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.195943  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:10.391862  992950 request.go:629] Waited for 195.282697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:10.391945  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:10.391951  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.391959  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.391964  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.395213  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:10.396080  992950 pod_ready.go:92] pod "kube-proxy-m765x" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:10.396111  992950 pod_ready.go:81] duration metric: took 399.46256ms for pod "kube-proxy-m765x" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.396126  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n6kkf" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.592088  992950 request.go:629] Waited for 195.871002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf
	I0729 13:31:10.592163  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf
	I0729 13:31:10.592170  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.592180  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.592196  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.595795  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:10.792506  992950 request.go:629] Waited for 195.630063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:10.792566  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:10.792571  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.792579  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.792584  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.796159  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:10.796903  992950 pod_ready.go:92] pod "kube-proxy-n6kkf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:10.796927  992950 pod_ready.go:81] duration metric: took 400.793171ms for pod "kube-proxy-n6kkf" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.796937  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.991987  992950 request.go:629] Waited for 194.970741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111
	I0729 13:31:10.992068  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111
	I0729 13:31:10.992074  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.992083  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.992089  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.996135  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:31:11.192252  992950 request.go:629] Waited for 195.285999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:11.192362  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:11.192373  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.192384  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.192403  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.195684  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.196262  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:11.196283  992950 pod_ready.go:81] duration metric: took 399.338583ms for pod "kube-scheduler-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.196293  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.392363  992950 request.go:629] Waited for 195.964444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m02
	I0729 13:31:11.392456  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m02
	I0729 13:31:11.392463  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.392476  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.392489  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.395644  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.592712  992950 request.go:629] Waited for 196.367309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:11.592781  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:11.592788  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.592799  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.592807  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.595949  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.596544  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:11.596568  992950 pod_ready.go:81] duration metric: took 400.267112ms for pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.596585  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.792562  992950 request.go:629] Waited for 195.888008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m03
	I0729 13:31:11.792661  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m03
	I0729 13:31:11.792670  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.792677  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.792682  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.795752  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.992848  992950 request.go:629] Waited for 196.362624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:11.992931  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:11.992936  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.992944  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.992950  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.996242  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.997212  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:11.997230  992950 pod_ready.go:81] duration metric: took 400.632488ms for pod "kube-scheduler-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.997242  992950 pod_ready.go:38] duration metric: took 5.201548599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
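
Each of the waits above fetches the pod, then its node, and passes once the pod's Ready condition is "True". A minimal client-go sketch of that per-pod check, using one of the pod names from the log (illustration only, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-104111-m03", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // A pod counts as "Ready" when its PodReady condition is True.
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
            }
        }
    }
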
	I0729 13:31:11.997261  992950 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:31:11.997323  992950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:31:12.015443  992950 api_server.go:72] duration metric: took 21.109037184s to wait for apiserver process to appear ...
	I0729 13:31:12.015469  992950 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:31:12.015497  992950 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 13:31:12.021527  992950 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 13:31:12.021620  992950 round_trippers.go:463] GET https://192.168.39.120:8443/version
	I0729 13:31:12.021632  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.021647  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.021655  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.022999  992950 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 13:31:12.023173  992950 api_server.go:141] control plane version: v1.30.3
	I0729 13:31:12.023198  992950 api_server.go:131] duration metric: took 7.721077ms to wait for apiserver health ...
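
With the system pods Ready, the client probes /healthz and then /version on the apiserver. A minimal sketch of the same two probes through client-go's REST and discovery clients (kubeconfig at the default path assumed; illustration only):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz: a healthy apiserver answers 200 with the body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version: reports the control-plane version (v1.30.3 in the run above).
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
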
	I0729 13:31:12.023207  992950 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:31:12.192885  992950 request.go:629] Waited for 169.588554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:12.192954  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:12.192959  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.192971  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.192974  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.199588  992950 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 13:31:12.208358  992950 system_pods.go:59] 24 kube-system pods found
	I0729 13:31:12.208391  992950 system_pods.go:61] "coredns-7db6d8ff4d-9jrnl" [0453ed97-efb4-41c1-8bfb-e7e004e618e0] Running
	I0729 13:31:12.208397  992950 system_pods.go:61] "coredns-7db6d8ff4d-gcf7q" [196981ba-ed16-427c-ae8b-9b7e8ff36be2] Running
	I0729 13:31:12.208401  992950 system_pods.go:61] "etcd-ha-104111" [309561db-8f30-4b42-8252-e02d9a26ec2e] Running
	I0729 13:31:12.208404  992950 system_pods.go:61] "etcd-ha-104111-m02" [4f09acca-1baa-4eba-8ef4-eb3e2b64512c] Running
	I0729 13:31:12.208425  992950 system_pods.go:61] "etcd-ha-104111-m03" [abbb320d-2480-4658-b404-c765904bb5ea] Running
	I0729 13:31:12.208430  992950 system_pods.go:61] "kindnet-9phpm" [60e9c45f-5176-492e-90c7-49b0201afe1e] Running
	I0729 13:31:12.208435  992950 system_pods.go:61] "kindnet-mt9dk" [5f1433be-0f2b-4502-a586-4014c7f23495] Running
	I0729 13:31:12.208440  992950 system_pods.go:61] "kindnet-njndz" [a0477f9b-b1ff-49d8-8f39-21ffb84377e9] Running
	I0729 13:31:12.208444  992950 system_pods.go:61] "kube-apiserver-ha-104111" [d546ecd4-9bdb-4e41-9e4a-74d0c81359d5] Running
	I0729 13:31:12.208447  992950 system_pods.go:61] "kube-apiserver-ha-104111-m02" [70bd608c-3ebe-4306-8ec9-61c254ca5261] Running
	I0729 13:31:12.208450  992950 system_pods.go:61] "kube-apiserver-ha-104111-m03" [8c1333cd-2e2a-4e55-af7a-6b399d6ecefa] Running
	I0729 13:31:12.208454  992950 system_pods.go:61] "kube-controller-manager-ha-104111" [03be8232-ff90-43e1-87e0-5d61aeaa7c96] Running
	I0729 13:31:12.208457  992950 system_pods.go:61] "kube-controller-manager-ha-104111-m02" [d2ca4758-3c38-4655-8bfb-b5a64b0b6bca] Running
	I0729 13:31:12.208461  992950 system_pods.go:61] "kube-controller-manager-ha-104111-m03" [ee0172da-49de-422e-b0cc-f015e6978f15] Running
	I0729 13:31:12.208464  992950 system_pods.go:61] "kube-proxy-5dnvv" [2fb3553e-b114-4528-bf9a-1765356bb2a4] Running
	I0729 13:31:12.208467  992950 system_pods.go:61] "kube-proxy-m765x" [d1051d27-125e-48d2-a3d5-3a2e99a2a04c] Running
	I0729 13:31:12.208474  992950 system_pods.go:61] "kube-proxy-n6kkf" [4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1] Running
	I0729 13:31:12.208477  992950 system_pods.go:61] "kube-scheduler-ha-104111" [3236068e-5891-4cb7-aa91-8aaf93260f3a] Running
	I0729 13:31:12.208481  992950 system_pods.go:61] "kube-scheduler-ha-104111-m02" [01a2d6c2-859d-44e8-9d53-0d257b4b4a1c] Running
	I0729 13:31:12.208485  992950 system_pods.go:61] "kube-scheduler-ha-104111-m03" [89bb0ec2-3f86-4801-bf24-2a038894a39f] Running
	I0729 13:31:12.208488  992950 system_pods.go:61] "kube-vip-ha-104111" [edfeb506-2884-4406-92cf-c35fce56d7c4] Running
	I0729 13:31:12.208491  992950 system_pods.go:61] "kube-vip-ha-104111-m02" [bcc970d3-1717-4971-8216-7526fe2028ba] Running
	I0729 13:31:12.208495  992950 system_pods.go:61] "kube-vip-ha-104111-m03" [5bd067b4-c367-4504-8ba2-4325efaa53a4] Running
	I0729 13:31:12.208498  992950 system_pods.go:61] "storage-provisioner" [b61cc52e-771b-484a-99d6-8963665cb1e8] Running
	I0729 13:31:12.208506  992950 system_pods.go:74] duration metric: took 185.291614ms to wait for pod list to return data ...
	I0729 13:31:12.208516  992950 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:31:12.391894  992950 request.go:629] Waited for 183.299294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/default/serviceaccounts
	I0729 13:31:12.391955  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/default/serviceaccounts
	I0729 13:31:12.391960  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.391967  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.391973  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.395215  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:12.395349  992950 default_sa.go:45] found service account: "default"
	I0729 13:31:12.395368  992950 default_sa.go:55] duration metric: took 186.84439ms for default service account to be created ...
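
The next gate waits for the "default" ServiceAccount, which the controller manager creates shortly after the namespace exists. A minimal sketch of the same lookup (illustration only):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Fails with a NotFound error until the controller manager has created it.
        sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("found service account:", sa.Name)
    }
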
	I0729 13:31:12.395382  992950 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:31:12.592777  992950 request.go:629] Waited for 197.236409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:12.592845  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:12.592852  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.592859  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.592864  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.599102  992950 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 13:31:12.606276  992950 system_pods.go:86] 24 kube-system pods found
	I0729 13:31:12.606303  992950 system_pods.go:89] "coredns-7db6d8ff4d-9jrnl" [0453ed97-efb4-41c1-8bfb-e7e004e618e0] Running
	I0729 13:31:12.606309  992950 system_pods.go:89] "coredns-7db6d8ff4d-gcf7q" [196981ba-ed16-427c-ae8b-9b7e8ff36be2] Running
	I0729 13:31:12.606314  992950 system_pods.go:89] "etcd-ha-104111" [309561db-8f30-4b42-8252-e02d9a26ec2e] Running
	I0729 13:31:12.606319  992950 system_pods.go:89] "etcd-ha-104111-m02" [4f09acca-1baa-4eba-8ef4-eb3e2b64512c] Running
	I0729 13:31:12.606323  992950 system_pods.go:89] "etcd-ha-104111-m03" [abbb320d-2480-4658-b404-c765904bb5ea] Running
	I0729 13:31:12.606327  992950 system_pods.go:89] "kindnet-9phpm" [60e9c45f-5176-492e-90c7-49b0201afe1e] Running
	I0729 13:31:12.606331  992950 system_pods.go:89] "kindnet-mt9dk" [5f1433be-0f2b-4502-a586-4014c7f23495] Running
	I0729 13:31:12.606335  992950 system_pods.go:89] "kindnet-njndz" [a0477f9b-b1ff-49d8-8f39-21ffb84377e9] Running
	I0729 13:31:12.606340  992950 system_pods.go:89] "kube-apiserver-ha-104111" [d546ecd4-9bdb-4e41-9e4a-74d0c81359d5] Running
	I0729 13:31:12.606344  992950 system_pods.go:89] "kube-apiserver-ha-104111-m02" [70bd608c-3ebe-4306-8ec9-61c254ca5261] Running
	I0729 13:31:12.606349  992950 system_pods.go:89] "kube-apiserver-ha-104111-m03" [8c1333cd-2e2a-4e55-af7a-6b399d6ecefa] Running
	I0729 13:31:12.606356  992950 system_pods.go:89] "kube-controller-manager-ha-104111" [03be8232-ff90-43e1-87e0-5d61aeaa7c96] Running
	I0729 13:31:12.606360  992950 system_pods.go:89] "kube-controller-manager-ha-104111-m02" [d2ca4758-3c38-4655-8bfb-b5a64b0b6bca] Running
	I0729 13:31:12.606367  992950 system_pods.go:89] "kube-controller-manager-ha-104111-m03" [ee0172da-49de-422e-b0cc-f015e6978f15] Running
	I0729 13:31:12.606371  992950 system_pods.go:89] "kube-proxy-5dnvv" [2fb3553e-b114-4528-bf9a-1765356bb2a4] Running
	I0729 13:31:12.606376  992950 system_pods.go:89] "kube-proxy-m765x" [d1051d27-125e-48d2-a3d5-3a2e99a2a04c] Running
	I0729 13:31:12.606380  992950 system_pods.go:89] "kube-proxy-n6kkf" [4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1] Running
	I0729 13:31:12.606386  992950 system_pods.go:89] "kube-scheduler-ha-104111" [3236068e-5891-4cb7-aa91-8aaf93260f3a] Running
	I0729 13:31:12.606391  992950 system_pods.go:89] "kube-scheduler-ha-104111-m02" [01a2d6c2-859d-44e8-9d53-0d257b4b4a1c] Running
	I0729 13:31:12.606399  992950 system_pods.go:89] "kube-scheduler-ha-104111-m03" [89bb0ec2-3f86-4801-bf24-2a038894a39f] Running
	I0729 13:31:12.606408  992950 system_pods.go:89] "kube-vip-ha-104111" [edfeb506-2884-4406-92cf-c35fce56d7c4] Running
	I0729 13:31:12.606414  992950 system_pods.go:89] "kube-vip-ha-104111-m02" [bcc970d3-1717-4971-8216-7526fe2028ba] Running
	I0729 13:31:12.606423  992950 system_pods.go:89] "kube-vip-ha-104111-m03" [5bd067b4-c367-4504-8ba2-4325efaa53a4] Running
	I0729 13:31:12.606429  992950 system_pods.go:89] "storage-provisioner" [b61cc52e-771b-484a-99d6-8963665cb1e8] Running
	I0729 13:31:12.606442  992950 system_pods.go:126] duration metric: took 211.050956ms to wait for k8s-apps to be running ...
	I0729 13:31:12.606454  992950 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:31:12.606528  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:31:12.623930  992950 system_svc.go:56] duration metric: took 17.466008ms WaitForService to wait for kubelet
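
The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH and relies only on the command's exit status. A minimal local sketch of the same idea with os/exec, using the plain unit name (illustration only, not the exact command minikube ships):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means the unit is active; --quiet suppresses the textual state.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
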
	I0729 13:31:12.623966  992950 kubeadm.go:582] duration metric: took 21.717566553s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:31:12.623996  992950 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:31:12.792481  992950 request.go:629] Waited for 168.383302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes
	I0729 13:31:12.792561  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes
	I0729 13:31:12.792568  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.792576  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.792581  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.796627  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:31:12.797777  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:31:12.797816  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:31:12.797828  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:31:12.797832  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:31:12.797835  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:31:12.797838  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:31:12.797842  992950 node_conditions.go:105] duration metric: took 173.840002ms to run NodePressure ...
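
The NodePressure step reads each node's reported capacity (17734596Ki of ephemeral storage and 2 CPUs per node in this run) alongside its pressure conditions. A minimal client-go sketch of listing the same capacity figures (illustration only):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a map of resource name to quantity on the node status.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
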
	I0729 13:31:12.797854  992950 start.go:241] waiting for startup goroutines ...
	I0729 13:31:12.797884  992950 start.go:255] writing updated cluster config ...
	I0729 13:31:12.798221  992950 ssh_runner.go:195] Run: rm -f paused
	I0729 13:31:12.853479  992950 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:31:12.855934  992950 out.go:177] * Done! kubectl is now configured to use "ha-104111" cluster and "default" namespace by default
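
The "==> CRI-O <==" section below is the container runtime's debug log: the same Version, ImageFsInfo and ListContainers CRI calls that the kubelet and crictl issue. A minimal sketch of reproducing those queries on the node by shelling out to crictl (assumes crictl is installed and pointed at the CRI-O socket; illustration only, not part of the test):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Each command maps onto one of the CRI RPCs seen in the log below:
        // version -> RuntimeService/Version, imagefsinfo -> ImageService/ImageFsInfo,
        // ps -a -> RuntimeService/ListContainers.
        for _, args := range [][]string{
            {"version"},
            {"imagefsinfo"},
            {"ps", "-a"},
        } {
            out, err := exec.Command("crictl", args...).CombinedOutput()
            if err != nil {
                fmt.Printf("crictl %v failed: %v\n", args, err)
                continue
            }
            fmt.Printf("$ crictl %v\n%s\n", args, out)
        }
    }
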
	
	
	==> CRI-O <==
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.118885500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36aa1d33-3ae0-4149-ae7e-603ae9780ddf name=/runtime.v1.RuntimeService/Version
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.120400313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68ee439f-da6a-4564-b011-a0922c576252 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.121722859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260087121648270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68ee439f-da6a-4564-b011-a0922c576252 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.122717112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c376bad9-28b7-4fc7-8f23-7212bd0b1ddd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.122769346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c376bad9-28b7-4fc7-8f23-7212bd0b1ddd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.123045668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722259875257833512,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98,PodSandboxId:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722259743370087937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743321034127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743267402214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16
-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONT
AINER_RUNNING,CreatedAt:1722259731581771295,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17222597280
97863200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd,PodSandboxId:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722259709027
885819,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259707029920791,Labels:map[string]string{i
o.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3,PodSandboxId:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259707048079889,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda,PodSandboxId:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259707040230251,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259706969247554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c376bad9-28b7-4fc7-8f23-7212bd0b1ddd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.166045560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18d4b10a-daea-4a2f-b821-127f6919cf37 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.166160727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18d4b10a-daea-4a2f-b821-127f6919cf37 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.167057725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d8913be-3293-41f3-bcbc-2933a72381c7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.167481994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260087167461682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d8913be-3293-41f3-bcbc-2933a72381c7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.168014947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbe5b4cc-84e7-4760-ad2a-29fdd91f4fe8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.168067836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbe5b4cc-84e7-4760-ad2a-29fdd91f4fe8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.168295745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722259875257833512,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98,PodSandboxId:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722259743370087937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743321034127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743267402214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16
-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONT
AINER_RUNNING,CreatedAt:1722259731581771295,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17222597280
97863200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd,PodSandboxId:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722259709027
885819,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259707029920791,Labels:map[string]string{i
o.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3,PodSandboxId:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259707048079889,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda,PodSandboxId:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259707040230251,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259706969247554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbe5b4cc-84e7-4760-ad2a-29fdd91f4fe8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.216505745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecb944cf-86cf-474b-a208-36e0162545ef name=/runtime.v1.RuntimeService/Version
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.216623550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecb944cf-86cf-474b-a208-36e0162545ef name=/runtime.v1.RuntimeService/Version
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.217491968Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76ba0fc6-b768-46c7-bbf2-c73c6fd4162f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.218035314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260087218008898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76ba0fc6-b768-46c7-bbf2-c73c6fd4162f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.218730927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25472b3c-be8e-4f16-af31-f3f7c6a623e4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.218780918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25472b3c-be8e-4f16-af31-f3f7c6a623e4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.219031406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722259875257833512,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98,PodSandboxId:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722259743370087937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743321034127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743267402214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16
-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONT
AINER_RUNNING,CreatedAt:1722259731581771295,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17222597280
97863200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd,PodSandboxId:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722259709027
885819,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259707029920791,Labels:map[string]string{i
o.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3,PodSandboxId:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259707048079889,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda,PodSandboxId:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259707040230251,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259706969247554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25472b3c-be8e-4f16-af31-f3f7c6a623e4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.240847190Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b9bfae5-1825-43ea-8db5-781b7ea47942 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.241122918Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-7xsjn,Uid:1fbdab29-6a6d-4b47-8df5-641b9aad98f0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722259874094887119,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:31:13.775614631Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9jrnl,Uid:0453ed97-efb4-41c1-8bfb-e7e004e618e0,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722259743070397300,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:29:02.747775089Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b61cc52e-771b-484a-99d6-8963665cb1e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722259743062803402,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T13:29:02.749157522Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gcf7q,Uid:196981ba-ed16-427c-ae8b-9b7e8ff36be2,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722259743050028290,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:29:02.742252134Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&PodSandboxMetadata{Name:kube-proxy-n6kkf,Uid:4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722259727970689395,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-29T13:28:46.158972524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&PodSandboxMetadata{Name:kindnet-9phpm,Uid:60e9c45f-5176-492e-90c7-49b0201afe1e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722259727960198640,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:28:46.149047133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-104111,Uid:dc73c358411265f24a0fdb288ab5434e,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1722259706794123362,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.120:8443,kubernetes.io/config.hash: dc73c358411265f24a0fdb288ab5434e,kubernetes.io/config.seen: 2024-07-29T13:28:26.319084014Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-104111,Uid:afdd0eae9701cf7d4013ed5835b6fc65,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722259706790763972,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: afdd0eae9701cf7d4013ed5835b6fc65,kubernetes.io/config.seen: 2024-07-29T13:28:26.319077318Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&PodSandboxMetadata{Name:etcd-ha-104111,Uid:80cb06508783f1cdddfbd3cd4c58d73c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722259706776085857,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.120:2379,kubernetes.io/config.hash: 80cb06508783f1cdddfbd3cd4c58d73c,kubernetes.io/config.seen: 2024-07-29T13:28:26.319083002Z,kube
rnetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-104111,Uid:143501fa8691d69c4a62f32dafe175d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722259706775428946,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{kubernetes.io/config.hash: 143501fa8691d69c4a62f32dafe175d1,kubernetes.io/config.seen: 2024-07-29T13:28:26.319082054Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-104111,Uid:2438b1d75fb1de3aa096517b67661add,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722259706770903228,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2438b1d75fb1de3aa096517b67661add,kubernetes.io/config.seen: 2024-07-29T13:28:26.319081129Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5b9bfae5-1825-43ea-8db5-781b7ea47942 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.242056787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d14dd81-fff1-41b0-872b-ab0094d0f4d4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.242344142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d14dd81-fff1-41b0-872b-ab0094d0f4d4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:34:47 ha-104111 crio[679]: time="2024-07-29 13:34:47.242736780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722259875257833512,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98,PodSandboxId:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722259743370087937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743321034127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743267402214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16
-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONT
AINER_RUNNING,CreatedAt:1722259731581771295,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17222597280
97863200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd,PodSandboxId:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722259709027
885819,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259707029920791,Labels:map[string]string{i
o.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3,PodSandboxId:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259707048079889,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda,PodSandboxId:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259707040230251,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259706969247554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d14dd81-fff1-41b0-872b-ab0094d0f4d4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2a033e8feb22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ab591310b6636       busybox-fc5497c4f-7xsjn
	1b86114506804       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   53fbe065ff113       storage-provisioner
	721762ac4017a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   d0c4b0845fee9       coredns-7db6d8ff4d-9jrnl
	81eca3ce5b15d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   32a1e9c01260e       coredns-7db6d8ff4d-gcf7q
	8fcba14c355c5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    5 minutes ago       Running             kindnet-cni               0                   6b9961791d750       kindnet-9phpm
	6bc357136c66b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                0                   60e3f945e9e89       kube-proxy-n6kkf
	50fe26dbcca1a       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   5433f25fd3565       kube-vip-ha-104111
	e4cce61f41e5d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   052a439cf2eb6       kube-apiserver-ha-104111
	8a9167ef54b81       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   46fd4a1e9c0a6       kube-controller-manager-ha-104111
	e80af660361f5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   a33809de7d3a6       kube-scheduler-ha-104111
	7606e1f107d6c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   3c84713255503       etcd-ha-104111
	
	
	==> coredns [721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56] <==
	[INFO] 10.244.0.4:51396 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003490725s
	[INFO] 10.244.0.4:37443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227588s
	[INFO] 10.244.0.4:33041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214614s
	[INFO] 10.244.2.2:60214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209682s
	[INFO] 10.244.2.2:35659 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00147984s
	[INFO] 10.244.2.2:53135 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000226257s
	[INFO] 10.244.2.2:49731 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189094s
	[INFO] 10.244.2.2:47456 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130859s
	[INFO] 10.244.2.2:41111 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123604s
	[INFO] 10.244.1.2:55083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114636s
	[INFO] 10.244.1.2:48422 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00109487s
	[INFO] 10.244.0.4:39213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116126s
	[INFO] 10.244.0.4:33260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068728s
	[INFO] 10.244.2.2:48083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166018s
	[INFO] 10.244.2.2:58646 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172185s
	[INFO] 10.244.2.2:35393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009321s
	[INFO] 10.244.1.2:57222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116426s
	[INFO] 10.244.0.4:60530 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165705s
	[INFO] 10.244.0.4:35848 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187393s
	[INFO] 10.244.0.4:34740 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104846s
	[INFO] 10.244.2.2:55008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235338s
	[INFO] 10.244.2.2:47084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152504s
	[INFO] 10.244.2.2:39329 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115623s
	[INFO] 10.244.1.2:57485 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001155s
	[INFO] 10.244.1.2:42349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100298s
	
	
	==> coredns [81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1] <==
	[INFO] 10.244.2.2:37347 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001498018s
	[INFO] 10.244.1.2:37481 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000448308s
	[INFO] 10.244.0.4:51964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090052s
	[INFO] 10.244.0.4:47886 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000124742s
	[INFO] 10.244.0.4:34248 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008049906s
	[INFO] 10.244.0.4:59749 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102743s
	[INFO] 10.244.0.4:46792 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124933s
	[INFO] 10.244.2.2:34901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159776s
	[INFO] 10.244.2.2:53333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001076187s
	[INFO] 10.244.1.2:57672 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002003185s
	[INFO] 10.244.1.2:53227 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161629s
	[INFO] 10.244.1.2:38444 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092353s
	[INFO] 10.244.1.2:56499 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211011s
	[INFO] 10.244.1.2:57556 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068457s
	[INFO] 10.244.1.2:34023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109815s
	[INFO] 10.244.0.4:40329 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111231s
	[INFO] 10.244.0.4:38637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005437s
	[INFO] 10.244.2.2:36810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104372s
	[INFO] 10.244.1.2:53024 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122232s
	[INFO] 10.244.1.2:40257 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148245s
	[INFO] 10.244.1.2:41500 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080394s
	[INFO] 10.244.0.4:48915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151276s
	[INFO] 10.244.2.2:60231 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001284s
	[INFO] 10.244.1.2:33829 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154134s
	[INFO] 10.244.1.2:57945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123045s
	
	
	==> describe nodes <==
	Name:               ha-104111
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_28_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:28:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:34:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:31:37 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:31:37 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:31:37 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:31:37 +0000   Mon, 29 Jul 2024 13:29:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-104111
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 613eb8d959344be3989ec50055edd8a7
	  System UUID:                613eb8d9-5934-4be3-989e-c50055edd8a7
	  Boot ID:                    5cf31ff2-8a2f-47f5-8440-f13293b7049d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7xsjn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-7db6d8ff4d-9jrnl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m1s
	  kube-system                 coredns-7db6d8ff4d-gcf7q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m1s
	  kube-system                 etcd-ha-104111                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-9phpm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-apiserver-ha-104111             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-104111    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-n6kkf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-ha-104111             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-104111                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m59s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m21s)  kubelet          Node ha-104111 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m21s)  kubelet          Node ha-104111 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m21s)  kubelet          Node ha-104111 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s                  kubelet          Node ha-104111 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s                  kubelet          Node ha-104111 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s                  kubelet          Node ha-104111 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal  NodeReady                5m45s                  kubelet          Node ha-104111 status is now: NodeReady
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal  RegisteredNode           3m41s                  node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	
	
	Name:               ha-104111-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_29_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:29:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:32:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 13:31:36 +0000   Mon, 29 Jul 2024 13:33:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 13:31:36 +0000   Mon, 29 Jul 2024 13:33:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 13:31:36 +0000   Mon, 29 Jul 2024 13:33:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 13:31:36 +0000   Mon, 29 Jul 2024 13:33:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    ha-104111-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0636dc68c5464326baedc11fd97131b2
	  System UUID:                0636dc68-c546-4326-baed-c11fd97131b2
	  Boot ID:                    0bd45770-0fe8-46cb-acfe-7c6dd18b1400
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sf8mb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-104111-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m13s
	  kube-system                 kindnet-njndz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m14s
	  kube-system                 kube-apiserver-ha-104111-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-104111-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-5dnvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-ha-104111-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-vip-ha-104111-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m15s)  kubelet          Node ha-104111-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m15s)  kubelet          Node ha-104111-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m15s)  kubelet          Node ha-104111-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           3m41s                  node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  NodeNotReady             97s                    node-controller  Node ha-104111-m02 status is now: NodeNotReady
	
	
	Name:               ha-104111-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_30_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:30:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:34:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:31:17 +0000   Mon, 29 Jul 2024 13:30:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:31:17 +0000   Mon, 29 Jul 2024 13:30:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:31:17 +0000   Mon, 29 Jul 2024 13:30:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:31:17 +0000   Mon, 29 Jul 2024 13:31:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-104111-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60c8ff09952242e0a709074b86dabf4c
	  System UUID:                60c8ff09-9522-42e0-a709-074b86dabf4c
	  Boot ID:                    fc349b62-d8b8-486e-8c1c-4a831212a0da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cbdn4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-104111-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m59s
	  kube-system                 kindnet-mt9dk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-apiserver-ha-104111-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ha-104111-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-m765x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-ha-104111-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-104111-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node ha-104111-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node ha-104111-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node ha-104111-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal  RegisteredNode           3m41s                node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	
	
	Name:               ha-104111-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_31_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:31:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:34:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:32:18 +0000   Mon, 29 Jul 2024 13:31:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:32:18 +0000   Mon, 29 Jul 2024 13:31:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:32:18 +0000   Mon, 29 Jul 2024 13:31:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:32:18 +0000   Mon, 29 Jul 2024 13:32:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-104111-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f6a0723aec74e89b187376957d3127c
	  System UUID:                4f6a0723-aec7-4e89-b187-376957d3127c
	  Boot ID:                    c3e20e4f-136a-4900-a21d-f31b613ea791
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fbnbc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-cmtgm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-104111-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-104111-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-104111-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal  NodeReady                2m41s            kubelet          Node ha-104111-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 13:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050265] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039095] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.731082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul29 13:28] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.580943] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.840145] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057324] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056430] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.166263] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.131459] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268832] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.161114] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +3.971481] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.058723] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.299885] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.972930] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 13:29] kauditd_printk_skb: 38 callbacks suppressed
	[ +36.765451] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a] <==
	{"level":"warn","ts":"2024-07-29T13:34:47.144906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.153791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.153919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.253538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.493792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.507071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.516532Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.524204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.528725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.53227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.540452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.546856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.552635Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.553832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.55601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.558829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.566431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.574304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.579747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.583838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.588045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.595166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.60638Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.612434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:34:47.654481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:34:47 up 6 min,  0 users,  load average: 0.57, 0.62, 0.30
	Linux ha-104111 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1] <==
	I0729 13:34:12.636028       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:34:22.632675       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:34:22.632781       1 main.go:299] handling current node
	I0729 13:34:22.632809       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:34:22.632839       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:34:22.632995       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:34:22.633031       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:34:22.633117       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:34:22.633138       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:34:32.632452       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:34:32.632623       1 main.go:299] handling current node
	I0729 13:34:32.632656       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:34:32.632675       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:34:32.632830       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:34:32.632853       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:34:32.632923       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:34:32.632943       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:34:42.628300       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:34:42.628395       1 main.go:299] handling current node
	I0729 13:34:42.628427       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:34:42.628446       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:34:42.628676       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:34:42.628724       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:34:42.628858       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:34:42.628887       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3] <==
	I0729 13:28:31.879445       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 13:28:31.886487       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.120]
	I0729 13:28:31.887810       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 13:28:31.893218       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 13:28:32.148356       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 13:28:33.305261       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 13:28:33.328420       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 13:28:33.360296       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 13:28:46.103373       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 13:28:46.434987       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 13:31:17.232046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43602: use of closed network connection
	E0729 13:31:17.428737       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43610: use of closed network connection
	E0729 13:31:17.614987       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43622: use of closed network connection
	E0729 13:31:17.800465       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43638: use of closed network connection
	E0729 13:31:18.002027       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43648: use of closed network connection
	E0729 13:31:18.196945       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43666: use of closed network connection
	E0729 13:31:18.375757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43672: use of closed network connection
	E0729 13:31:18.572267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43688: use of closed network connection
	E0729 13:31:18.747369       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43704: use of closed network connection
	E0729 13:31:19.036524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43720: use of closed network connection
	E0729 13:31:19.223066       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43734: use of closed network connection
	E0729 13:31:19.395856       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43746: use of closed network connection
	E0729 13:31:19.572154       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43764: use of closed network connection
	E0729 13:31:19.751874       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47734: use of closed network connection
	W0729 13:32:41.896160       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.120 192.168.39.202]
	
	
	==> kube-controller-manager [8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda] <==
	I0729 13:30:46.832484       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-104111-m03" podCIDRs=["10.244.2.0/24"]
	I0729 13:30:50.424699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-104111-m03"
	I0729 13:31:13.771690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.903666ms"
	I0729 13:31:13.805615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.470887ms"
	I0729 13:31:13.805737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.891µs"
	I0729 13:31:13.951864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.326641ms"
	I0729 13:31:14.138975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="186.965426ms"
	I0729 13:31:14.139190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.694µs"
	I0729 13:31:14.244057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.90239ms"
	I0729 13:31:14.244328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.196µs"
	I0729 13:31:15.232746       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.393µs"
	I0729 13:31:15.998647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.868229ms"
	I0729 13:31:15.998867       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.957µs"
	I0729 13:31:16.138932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.195922ms"
	I0729 13:31:16.139066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.833µs"
	I0729 13:31:16.764015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.067893ms"
	I0729 13:31:16.764302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.513µs"
	E0729 13:31:47.430090       1 certificate_controller.go:146] Sync csr-p7x6f failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-p7x6f": the object has been modified; please apply your changes to the latest version and try again
	I0729 13:31:47.708092       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-104111-m04\" does not exist"
	I0729 13:31:47.754896       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-104111-m04" podCIDRs=["10.244.3.0/24"]
	I0729 13:31:50.456726       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-104111-m04"
	I0729 13:32:06.415875       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-104111-m04"
	I0729 13:33:10.502660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-104111-m04"
	I0729 13:33:10.685858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.213158ms"
	I0729 13:33:10.686142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.532µs"
	
	
	==> kube-proxy [6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8] <==
	I0729 13:28:48.318203       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:28:48.335952       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	I0729 13:28:48.436864       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:28:48.436915       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:28:48.437047       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:28:48.441369       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:28:48.441837       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:28:48.441853       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:28:48.443654       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:28:48.444098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:28:48.444247       1 config.go:192] "Starting service config controller"
	I0729 13:28:48.444277       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:28:48.445332       1 config.go:319] "Starting node config controller"
	I0729 13:28:48.445542       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:28:48.545120       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:28:48.545236       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:28:48.549600       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1] <==
	W0729 13:28:31.143239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:28:31.143337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 13:28:31.190121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:28:31.190215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:28:31.190453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 13:28:31.190497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 13:28:31.215678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:28:31.216250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:28:31.420427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:28:31.420483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 13:28:31.494455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 13:28:31.494504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0729 13:28:33.383078       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 13:30:46.961298       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mvvrc\": pod kindnet-mvvrc is already assigned to node \"ha-104111-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-mvvrc" node="ha-104111-m03"
	E0729 13:30:46.962268       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2166a71d-cd5a-4e07-b827-a4789ca6b3c5(kube-system/kindnet-mvvrc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mvvrc"
	E0729 13:30:46.962360       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mvvrc\": pod kindnet-mvvrc is already assigned to node \"ha-104111-m03\"" pod="kube-system/kindnet-mvvrc"
	I0729 13:30:46.962417       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mvvrc" node="ha-104111-m03"
	E0729 13:30:46.969946       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sdksp\": pod kube-proxy-sdksp is already assigned to node \"ha-104111-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sdksp" node="ha-104111-m03"
	E0729 13:30:46.970015       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6018426d-cebb-4c86-a261-c760ae46b755(kube-system/kube-proxy-sdksp) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sdksp"
	E0729 13:30:46.970035       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sdksp\": pod kube-proxy-sdksp is already assigned to node \"ha-104111-m03\"" pod="kube-system/kube-proxy-sdksp"
	I0729 13:30:46.970053       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sdksp" node="ha-104111-m03"
	E0729 13:31:47.773124       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fbnbc\": pod kindnet-fbnbc is already assigned to node \"ha-104111-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fbnbc" node="ha-104111-m04"
	E0729 13:31:47.773246       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fbnbc\": pod kindnet-fbnbc is already assigned to node \"ha-104111-m04\"" pod="kube-system/kindnet-fbnbc"
	E0729 13:31:47.773666       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cmtgm\": pod kube-proxy-cmtgm is already assigned to node \"ha-104111-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cmtgm" node="ha-104111-m04"
	E0729 13:31:47.773724       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cmtgm\": pod kube-proxy-cmtgm is already assigned to node \"ha-104111-m04\"" pod="kube-system/kube-proxy-cmtgm"
	
	
	==> kubelet <==
	Jul 29 13:30:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:30:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:31:13 ha-104111 kubelet[1361]: I0729 13:31:13.776648    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9jrnl" podStartSLOduration=147.776424404 podStartE2EDuration="2m27.776424404s" podCreationTimestamp="2024-07-29 13:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 13:29:04.499017265 +0000 UTC m=+31.404501824" watchObservedRunningTime="2024-07-29 13:31:13.776424404 +0000 UTC m=+160.681908961"
	Jul 29 13:31:13 ha-104111 kubelet[1361]: I0729 13:31:13.777648    1361 topology_manager.go:215] "Topology Admit Handler" podUID="1fbdab29-6a6d-4b47-8df5-641b9aad98f0" podNamespace="default" podName="busybox-fc5497c4f-7xsjn"
	Jul 29 13:31:13 ha-104111 kubelet[1361]: I0729 13:31:13.802468    1361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9tcc\" (UniqueName: \"kubernetes.io/projected/1fbdab29-6a6d-4b47-8df5-641b9aad98f0-kube-api-access-c9tcc\") pod \"busybox-fc5497c4f-7xsjn\" (UID: \"1fbdab29-6a6d-4b47-8df5-641b9aad98f0\") " pod="default/busybox-fc5497c4f-7xsjn"
	Jul 29 13:31:33 ha-104111 kubelet[1361]: E0729 13:31:33.264239    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:31:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:31:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:31:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:31:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:32:33 ha-104111 kubelet[1361]: E0729 13:32:33.269290    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:32:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:32:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:32:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:32:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:33:33 ha-104111 kubelet[1361]: E0729 13:33:33.265694    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:33:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:33:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:33:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:33:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:34:33 ha-104111 kubelet[1361]: E0729 13:34:33.264440    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:34:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:34:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:34:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:34:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-104111 -n ha-104111
helpers_test.go:261: (dbg) Run:  kubectl --context ha-104111 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.98s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (50.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 3 (3.195984912s)

                                                
                                                
-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-104111-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:34:52.215259  997646 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:52.215377  997646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:52.215386  997646 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:52.215390  997646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:52.215556  997646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:34:52.215744  997646 out.go:298] Setting JSON to false
	I0729 13:34:52.215774  997646 mustload.go:65] Loading cluster: ha-104111
	I0729 13:34:52.215924  997646 notify.go:220] Checking for updates...
	I0729 13:34:52.216136  997646 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:34:52.216151  997646 status.go:255] checking status of ha-104111 ...
	I0729 13:34:52.216572  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:52.216638  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:52.232792  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0729 13:34:52.233275  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:52.233845  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:52.233871  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:52.234256  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:52.234437  997646 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:34:52.236112  997646 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:34:52.236131  997646 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:34:52.236447  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:52.236503  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:52.250964  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0729 13:34:52.251384  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:52.251861  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:52.251884  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:52.252164  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:52.252316  997646 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:34:52.254870  997646 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:52.255301  997646 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:34:52.255336  997646 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:52.255493  997646 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:34:52.255809  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:52.255852  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:52.271237  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I0729 13:34:52.271694  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:52.272351  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:52.272376  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:52.272790  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:52.273088  997646 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:34:52.273345  997646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:52.273429  997646 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:34:52.276396  997646 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:52.276894  997646 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:34:52.276938  997646 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:52.277076  997646 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:34:52.277239  997646 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:34:52.277395  997646 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:34:52.277537  997646 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:34:52.367511  997646 ssh_runner.go:195] Run: systemctl --version
	I0729 13:34:52.374535  997646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:34:52.390127  997646 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:34:52.390161  997646 api_server.go:166] Checking apiserver status ...
	I0729 13:34:52.390206  997646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:34:52.404725  997646 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0729 13:34:52.414219  997646 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:34:52.414282  997646 ssh_runner.go:195] Run: ls
	I0729 13:34:52.420012  997646 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:34:52.425033  997646 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:34:52.425058  997646 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:34:52.425071  997646 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:34:52.425096  997646 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:34:52.425380  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:52.425427  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:52.441085  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38117
	I0729 13:34:52.441545  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:52.442087  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:52.442108  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:52.442459  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:52.442623  997646 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:34:52.444124  997646 status.go:330] ha-104111-m02 host status = "Running" (err=<nil>)
	I0729 13:34:52.444144  997646 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:34:52.444466  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:52.444516  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:52.459096  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0729 13:34:52.459438  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:52.459872  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:52.459893  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:52.460189  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:52.460349  997646 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:34:52.462688  997646 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:52.463130  997646 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:34:52.463163  997646 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:52.463213  997646 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:34:52.463491  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:52.463524  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:52.478159  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41543
	I0729 13:34:52.478560  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:52.479031  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:52.479051  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:52.479336  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:52.479499  997646 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:34:52.479689  997646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:52.479719  997646 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:34:52.482613  997646 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:52.483061  997646 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:34:52.483082  997646 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:52.483232  997646 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:34:52.483402  997646 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:34:52.483542  997646 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:34:52.483663  997646 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	W0729 13:34:55.012681  997646 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:34:55.012794  997646 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0729 13:34:55.012811  997646 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:34:55.012819  997646 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 13:34:55.012844  997646 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:34:55.012857  997646 status.go:255] checking status of ha-104111-m03 ...
	I0729 13:34:55.013172  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:55.013215  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:55.029245  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0729 13:34:55.029674  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:55.030141  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:55.030172  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:55.030490  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:55.030716  997646 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:34:55.032183  997646 status.go:330] ha-104111-m03 host status = "Running" (err=<nil>)
	I0729 13:34:55.032200  997646 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:34:55.032598  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:55.032641  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:55.047129  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0729 13:34:55.047573  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:55.048048  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:55.048118  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:55.048503  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:55.048690  997646 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:34:55.051393  997646 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:34:55.051786  997646 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:34:55.051809  997646 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:34:55.051973  997646 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:34:55.052293  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:55.052326  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:55.066909  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0729 13:34:55.067236  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:55.067703  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:55.067726  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:55.068045  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:55.068225  997646 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:34:55.068422  997646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:55.068460  997646 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:34:55.071069  997646 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:34:55.071631  997646 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:34:55.071659  997646 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:34:55.071823  997646 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:34:55.071981  997646 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:34:55.072123  997646 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:34:55.072237  997646 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:34:55.156957  997646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:34:55.173732  997646 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:34:55.173764  997646 api_server.go:166] Checking apiserver status ...
	I0729 13:34:55.173806  997646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:34:55.188301  997646 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0729 13:34:55.197796  997646 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:34:55.197845  997646 ssh_runner.go:195] Run: ls
	I0729 13:34:55.203007  997646 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:34:55.207637  997646 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:34:55.207661  997646 status.go:422] ha-104111-m03 apiserver status = Running (err=<nil>)
	I0729 13:34:55.207669  997646 status.go:257] ha-104111-m03 status: &{Name:ha-104111-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:34:55.207684  997646 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:34:55.208037  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:55.208083  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:55.223206  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I0729 13:34:55.223648  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:55.224142  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:55.224165  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:55.224558  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:55.224771  997646 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:34:55.226589  997646 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:34:55.226611  997646 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:34:55.227021  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:55.227064  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:55.242000  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0729 13:34:55.242397  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:55.242908  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:55.242935  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:55.243293  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:55.243511  997646 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:34:55.246450  997646 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:34:55.246851  997646 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:34:55.246881  997646 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:34:55.247001  997646 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:34:55.247297  997646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:55.247346  997646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:55.261911  997646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0729 13:34:55.262356  997646 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:55.262907  997646 main.go:141] libmachine: Using API Version  1
	I0729 13:34:55.262933  997646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:55.263219  997646 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:55.263419  997646 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:34:55.263595  997646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:55.263615  997646 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:34:55.266272  997646 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:34:55.266710  997646 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:34:55.266739  997646 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:34:55.266897  997646 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:34:55.267050  997646 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:34:55.267190  997646 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:34:55.267335  997646 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:34:55.348743  997646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:34:55.364235  997646 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 3 (5.146378022s)

                                                
                                                
-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-104111-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:34:56.415870  997746 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:34:56.416117  997746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:56.416125  997746 out.go:304] Setting ErrFile to fd 2...
	I0729 13:34:56.416130  997746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:34:56.416357  997746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:34:56.416573  997746 out.go:298] Setting JSON to false
	I0729 13:34:56.416612  997746 mustload.go:65] Loading cluster: ha-104111
	I0729 13:34:56.416669  997746 notify.go:220] Checking for updates...
	I0729 13:34:56.417045  997746 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:34:56.417068  997746 status.go:255] checking status of ha-104111 ...
	I0729 13:34:56.417548  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:56.417598  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:56.438612  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0729 13:34:56.439024  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:56.439705  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:34:56.439743  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:56.440160  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:56.440325  997746 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:34:56.441923  997746 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:34:56.441943  997746 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:34:56.442222  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:56.442262  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:56.457641  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45791
	I0729 13:34:56.458102  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:56.458567  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:34:56.458603  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:56.458983  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:56.459151  997746 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:34:56.462001  997746 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:56.462379  997746 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:34:56.462403  997746 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:56.462571  997746 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:34:56.462856  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:56.462890  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:56.477693  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0729 13:34:56.478052  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:56.478505  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:34:56.478533  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:56.478826  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:56.479029  997746 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:34:56.479224  997746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:56.479248  997746 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:34:56.482310  997746 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:56.482770  997746 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:34:56.482802  997746 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:34:56.482960  997746 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:34:56.483151  997746 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:34:56.483330  997746 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:34:56.483467  997746 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:34:56.568659  997746 ssh_runner.go:195] Run: systemctl --version
	I0729 13:34:56.574812  997746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:34:56.594194  997746 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:34:56.594229  997746 api_server.go:166] Checking apiserver status ...
	I0729 13:34:56.594265  997746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:34:56.609253  997746 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0729 13:34:56.620475  997746 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:34:56.620561  997746 ssh_runner.go:195] Run: ls
	I0729 13:34:56.626695  997746 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:34:56.631036  997746 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:34:56.631062  997746 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:34:56.631074  997746 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:34:56.631101  997746 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:34:56.631405  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:56.631462  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:56.647953  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45381
	I0729 13:34:56.648372  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:56.648888  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:34:56.648913  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:56.649269  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:56.649477  997746 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:34:56.651145  997746 status.go:330] ha-104111-m02 host status = "Running" (err=<nil>)
	I0729 13:34:56.651165  997746 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:34:56.651510  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:56.651561  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:56.666934  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44677
	I0729 13:34:56.667351  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:56.667880  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:34:56.667902  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:56.668233  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:56.668451  997746 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:34:56.671300  997746 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:56.671734  997746 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:34:56.671762  997746 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:56.671874  997746 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:34:56.672180  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:34:56.672217  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:34:56.687465  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0729 13:34:56.687937  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:34:56.688443  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:34:56.688469  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:34:56.688785  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:34:56.688977  997746 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:34:56.689156  997746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:34:56.689174  997746 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:34:56.691740  997746 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:56.692184  997746 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:34:56.692220  997746 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:34:56.692349  997746 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:34:56.692547  997746 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:34:56.692671  997746 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:34:56.692796  997746 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	W0729 13:34:58.088704  997746 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:34:58.088763  997746 retry.go:31] will retry after 186.49429ms: dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:35:01.156679  997746 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:35:01.156796  997746 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0729 13:35:01.156829  997746 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:01.156837  997746 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 13:35:01.156858  997746 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:01.156865  997746 status.go:255] checking status of ha-104111-m03 ...
	I0729 13:35:01.157188  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:01.157236  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:01.173042  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0729 13:35:01.173546  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:01.174187  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:35:01.174216  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:01.174544  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:01.174767  997746 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:35:01.176508  997746 status.go:330] ha-104111-m03 host status = "Running" (err=<nil>)
	I0729 13:35:01.176529  997746 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:01.176979  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:01.177045  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:01.191789  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37777
	I0729 13:35:01.192264  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:01.192796  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:35:01.192820  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:01.193194  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:01.193405  997746 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:35:01.196149  997746 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:01.196600  997746 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:01.196629  997746 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:01.196777  997746 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:01.197094  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:01.197131  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:01.212109  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0729 13:35:01.212550  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:01.213068  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:35:01.213093  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:01.213446  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:01.213692  997746 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:35:01.213914  997746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:01.213937  997746 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:35:01.216776  997746 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:01.217210  997746 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:01.217237  997746 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:01.217382  997746 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:35:01.217553  997746 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:35:01.217698  997746 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:35:01.217832  997746 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:35:01.299871  997746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:01.314534  997746 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:01.314578  997746 api_server.go:166] Checking apiserver status ...
	I0729 13:35:01.314623  997746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:01.328952  997746 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0729 13:35:01.341208  997746 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:01.341293  997746 ssh_runner.go:195] Run: ls
	I0729 13:35:01.346061  997746 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:01.352277  997746 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:01.352306  997746 status.go:422] ha-104111-m03 apiserver status = Running (err=<nil>)
	I0729 13:35:01.352317  997746 status.go:257] ha-104111-m03 status: &{Name:ha-104111-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:01.352332  997746 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:35:01.352726  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:01.352778  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:01.367730  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0729 13:35:01.368168  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:01.368708  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:35:01.368772  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:01.369117  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:01.369302  997746 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:35:01.370791  997746 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:35:01.370808  997746 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:01.371089  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:01.371119  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:01.387001  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44849
	I0729 13:35:01.387448  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:01.387928  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:35:01.387950  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:01.388289  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:01.388512  997746 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:35:01.391086  997746 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:01.391575  997746 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:01.391602  997746 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:01.391757  997746 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:01.392048  997746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:01.392089  997746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:01.408070  997746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0729 13:35:01.408503  997746 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:01.408977  997746 main.go:141] libmachine: Using API Version  1
	I0729 13:35:01.408999  997746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:01.409325  997746 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:01.409549  997746 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:35:01.409779  997746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:01.409803  997746 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:35:01.412725  997746 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:01.413103  997746 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:01.413131  997746 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:01.413241  997746 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:35:01.413420  997746 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:35:01.413596  997746 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:35:01.413741  997746 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:35:01.500204  997746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:01.514994  997746 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 3 (5.073822016s)

                                                
                                                
-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-104111-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:35:02.625777  997846 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:35:02.626069  997846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:02.626080  997846 out.go:304] Setting ErrFile to fd 2...
	I0729 13:35:02.626083  997846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:02.626343  997846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:35:02.626565  997846 out.go:298] Setting JSON to false
	I0729 13:35:02.626599  997846 mustload.go:65] Loading cluster: ha-104111
	I0729 13:35:02.626668  997846 notify.go:220] Checking for updates...
	I0729 13:35:02.627045  997846 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:35:02.627061  997846 status.go:255] checking status of ha-104111 ...
	I0729 13:35:02.627549  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:02.627679  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:02.644574  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43193
	I0729 13:35:02.645005  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:02.645612  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:02.645644  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:02.646023  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:02.646230  997846 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:35:02.647961  997846 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:35:02.647981  997846 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:02.648304  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:02.648351  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:02.663610  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41965
	I0729 13:35:02.664047  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:02.664653  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:02.664674  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:02.664967  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:02.665161  997846 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:35:02.667939  997846 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:02.668354  997846 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:02.668383  997846 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:02.668544  997846 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:02.668937  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:02.668974  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:02.685218  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39441
	I0729 13:35:02.685585  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:02.686052  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:02.686076  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:02.686445  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:02.686630  997846 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:35:02.686843  997846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:02.686864  997846 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:35:02.689640  997846 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:02.690110  997846 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:02.690150  997846 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:02.690257  997846 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:35:02.690433  997846 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:35:02.690595  997846 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:35:02.690742  997846 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:35:02.777179  997846 ssh_runner.go:195] Run: systemctl --version
	I0729 13:35:02.783674  997846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:02.799199  997846 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:02.799232  997846 api_server.go:166] Checking apiserver status ...
	I0729 13:35:02.799267  997846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:02.814748  997846 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0729 13:35:02.825232  997846 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:02.825300  997846 ssh_runner.go:195] Run: ls
	I0729 13:35:02.829858  997846 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:02.835474  997846 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:02.835497  997846 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:35:02.835508  997846 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:02.835525  997846 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:35:02.835850  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:02.835890  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:02.851178  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I0729 13:35:02.851621  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:02.852099  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:02.852121  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:02.852510  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:02.852708  997846 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:35:02.854313  997846 status.go:330] ha-104111-m02 host status = "Running" (err=<nil>)
	I0729 13:35:02.854331  997846 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:35:02.854682  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:02.854723  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:02.869817  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0729 13:35:02.870258  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:02.870945  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:02.870967  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:02.871397  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:02.871631  997846 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:35:02.874431  997846 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:02.874832  997846 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:35:02.874862  997846 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:02.875024  997846 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:35:02.875351  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:02.875386  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:02.891598  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0729 13:35:02.892023  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:02.892519  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:02.892544  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:02.892908  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:02.893119  997846 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:35:02.893315  997846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:02.893342  997846 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:35:02.896125  997846 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:02.896544  997846 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:35:02.896570  997846 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:02.896708  997846 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:35:02.896867  997846 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:35:02.897016  997846 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:35:02.897127  997846 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	W0729 13:35:04.232672  997846 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:04.232773  997846 retry.go:31] will retry after 196.686912ms: dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:35:07.300745  997846 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:35:07.300895  997846 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0729 13:35:07.300925  997846 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:07.300942  997846 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 13:35:07.300968  997846 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:07.300981  997846 status.go:255] checking status of ha-104111-m03 ...
	I0729 13:35:07.301315  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:07.301363  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:07.317057  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34543
	I0729 13:35:07.317503  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:07.317943  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:07.317966  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:07.318281  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:07.318470  997846 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:35:07.319932  997846 status.go:330] ha-104111-m03 host status = "Running" (err=<nil>)
	I0729 13:35:07.319960  997846 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:07.320402  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:07.320471  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:07.335024  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37491
	I0729 13:35:07.335470  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:07.335982  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:07.336009  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:07.336333  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:07.336531  997846 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:35:07.339313  997846 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:07.339784  997846 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:07.339806  997846 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:07.339953  997846 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:07.340295  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:07.340342  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:07.355198  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I0729 13:35:07.355539  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:07.355986  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:07.356012  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:07.356311  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:07.356502  997846 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:35:07.356710  997846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:07.356729  997846 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:35:07.359200  997846 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:07.359583  997846 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:07.359613  997846 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:07.359681  997846 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:35:07.359851  997846 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:35:07.360017  997846 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:35:07.360174  997846 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:35:07.440818  997846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:07.457852  997846 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:07.457880  997846 api_server.go:166] Checking apiserver status ...
	I0729 13:35:07.457911  997846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:07.472842  997846 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0729 13:35:07.483560  997846 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:07.483615  997846 ssh_runner.go:195] Run: ls
	I0729 13:35:07.488069  997846 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:07.493919  997846 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:07.493948  997846 status.go:422] ha-104111-m03 apiserver status = Running (err=<nil>)
	I0729 13:35:07.493959  997846 status.go:257] ha-104111-m03 status: &{Name:ha-104111-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:07.493976  997846 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:35:07.494359  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:07.494401  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:07.509787  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38197
	I0729 13:35:07.510228  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:07.510704  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:07.510723  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:07.511040  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:07.511248  997846 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:35:07.512934  997846 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:35:07.512951  997846 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:07.513236  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:07.513270  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:07.528704  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0729 13:35:07.529124  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:07.529604  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:07.529631  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:07.529912  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:07.530094  997846 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:35:07.532857  997846 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:07.533279  997846 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:07.533311  997846 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:07.533457  997846 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:07.533801  997846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:07.533845  997846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:07.548757  997846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0729 13:35:07.549198  997846 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:07.549665  997846 main.go:141] libmachine: Using API Version  1
	I0729 13:35:07.549686  997846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:07.549952  997846 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:07.550117  997846 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:35:07.550292  997846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:07.550316  997846 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:35:07.552735  997846 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:07.553169  997846 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:07.553204  997846 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:07.553298  997846 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:35:07.553479  997846 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:35:07.553622  997846 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:35:07.553764  997846 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:35:07.636939  997846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:07.655001  997846 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 3 (3.721334731s)

-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-104111-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 13:35:10.402411  997945 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:35:10.402542  997945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:10.402558  997945 out.go:304] Setting ErrFile to fd 2...
	I0729 13:35:10.402565  997945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:10.402898  997945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:35:10.403073  997945 out.go:298] Setting JSON to false
	I0729 13:35:10.403100  997945 mustload.go:65] Loading cluster: ha-104111
	I0729 13:35:10.403219  997945 notify.go:220] Checking for updates...
	I0729 13:35:10.403458  997945 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:35:10.403476  997945 status.go:255] checking status of ha-104111 ...
	I0729 13:35:10.403861  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:10.403918  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:10.419820  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I0729 13:35:10.420283  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:10.420929  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:10.420950  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:10.421312  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:10.421558  997945 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:35:10.423282  997945 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:35:10.423301  997945 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:10.423728  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:10.423779  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:10.439303  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I0729 13:35:10.439752  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:10.440220  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:10.440252  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:10.440588  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:10.440772  997945 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:35:10.443715  997945 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:10.444124  997945 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:10.444150  997945 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:10.444273  997945 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:10.444609  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:10.444645  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:10.459814  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0729 13:35:10.460253  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:10.460804  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:10.460831  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:10.461129  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:10.461313  997945 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:35:10.461525  997945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:10.461562  997945 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:35:10.464027  997945 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:10.464472  997945 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:10.464496  997945 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:10.464643  997945 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:35:10.464803  997945 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:35:10.464976  997945 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:35:10.465127  997945 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:35:10.553283  997945 ssh_runner.go:195] Run: systemctl --version
	I0729 13:35:10.559538  997945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:10.575974  997945 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:10.576010  997945 api_server.go:166] Checking apiserver status ...
	I0729 13:35:10.576053  997945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:10.594649  997945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0729 13:35:10.604385  997945 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:10.604636  997945 ssh_runner.go:195] Run: ls
	I0729 13:35:10.610297  997945 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:10.615640  997945 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:10.615669  997945 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:35:10.615682  997945 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:10.615704  997945 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:35:10.616066  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:10.616108  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:10.631731  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I0729 13:35:10.632208  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:10.632774  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:10.632795  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:10.633114  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:10.633325  997945 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:35:10.634761  997945 status.go:330] ha-104111-m02 host status = "Running" (err=<nil>)
	I0729 13:35:10.634777  997945 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:35:10.635064  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:10.635096  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:10.651255  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43201
	I0729 13:35:10.651659  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:10.652066  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:10.652095  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:10.652418  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:10.652645  997945 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:35:10.655271  997945 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:10.655706  997945 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:35:10.655739  997945 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:10.655879  997945 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:35:10.656276  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:10.656358  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:10.672568  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0729 13:35:10.673142  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:10.673647  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:10.673675  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:10.673992  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:10.674187  997945 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:35:10.674357  997945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:10.674384  997945 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:35:10.677540  997945 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:10.678059  997945 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:35:10.678096  997945 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:10.678249  997945 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:35:10.678447  997945 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:35:10.678603  997945 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:35:10.678753  997945 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	W0729 13:35:13.732679  997945 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:35:13.732773  997945 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0729 13:35:13.732790  997945 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:13.732799  997945 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 13:35:13.732821  997945 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:13.732845  997945 status.go:255] checking status of ha-104111-m03 ...
	I0729 13:35:13.733182  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:13.733239  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:13.749558  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42901
	I0729 13:35:13.750028  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:13.750546  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:13.750572  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:13.750919  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:13.751127  997945 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:35:13.752772  997945 status.go:330] ha-104111-m03 host status = "Running" (err=<nil>)
	I0729 13:35:13.752794  997945 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:13.753110  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:13.753150  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:13.768560  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38279
	I0729 13:35:13.768985  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:13.769416  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:13.769436  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:13.769736  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:13.769922  997945 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:35:13.772719  997945 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:13.773077  997945 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:13.773106  997945 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:13.773243  997945 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:13.773545  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:13.773581  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:13.788085  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43239
	I0729 13:35:13.788439  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:13.788873  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:13.788892  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:13.789213  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:13.789444  997945 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:35:13.789633  997945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:13.789654  997945 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:35:13.791935  997945 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:13.792325  997945 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:13.792362  997945 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:13.792511  997945 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:35:13.792676  997945 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:35:13.792825  997945 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:35:13.792964  997945 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:35:13.875561  997945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:13.890571  997945 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:13.890600  997945 api_server.go:166] Checking apiserver status ...
	I0729 13:35:13.890630  997945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:13.904417  997945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0729 13:35:13.914160  997945 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:13.914220  997945 ssh_runner.go:195] Run: ls
	I0729 13:35:13.918616  997945 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:13.922705  997945 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:13.922732  997945 status.go:422] ha-104111-m03 apiserver status = Running (err=<nil>)
	I0729 13:35:13.922743  997945 status.go:257] ha-104111-m03 status: &{Name:ha-104111-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:13.922775  997945 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:35:13.923206  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:13.923254  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:13.938459  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0729 13:35:13.938887  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:13.939387  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:13.939409  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:13.939787  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:13.939989  997945 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:35:13.941596  997945 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:35:13.941627  997945 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:13.942017  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:13.942064  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:13.957919  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0729 13:35:13.958357  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:13.958826  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:13.958849  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:13.959171  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:13.959377  997945 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:35:13.962033  997945 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:13.962482  997945 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:13.962527  997945 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:13.962685  997945 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:13.963091  997945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:13.963138  997945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:13.978247  997945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I0729 13:35:13.978683  997945 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:13.979206  997945 main.go:141] libmachine: Using API Version  1
	I0729 13:35:13.979228  997945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:13.979586  997945 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:13.979764  997945 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:35:13.979931  997945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:13.979950  997945 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:35:13.982749  997945 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:13.983256  997945 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:13.983287  997945 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:13.983437  997945 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:35:13.983600  997945 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:35:13.983767  997945 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:35:13.983926  997945 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:35:14.063986  997945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:14.077731  997945 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 3 (4.262741493s)

-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-104111-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 13:35:16.165114  998061 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:35:16.165264  998061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:16.165273  998061 out.go:304] Setting ErrFile to fd 2...
	I0729 13:35:16.165278  998061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:16.165460  998061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:35:16.165643  998061 out.go:298] Setting JSON to false
	I0729 13:35:16.165679  998061 mustload.go:65] Loading cluster: ha-104111
	I0729 13:35:16.165796  998061 notify.go:220] Checking for updates...
	I0729 13:35:16.166195  998061 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:35:16.166219  998061 status.go:255] checking status of ha-104111 ...
	I0729 13:35:16.166763  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:16.166854  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:16.185935  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0729 13:35:16.186472  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:16.187103  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:16.187121  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:16.187472  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:16.187685  998061 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:35:16.189460  998061 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:35:16.189480  998061 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:16.189767  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:16.189815  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:16.205857  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I0729 13:35:16.206281  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:16.206778  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:16.206803  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:16.207132  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:16.207346  998061 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:35:16.210181  998061 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:16.210680  998061 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:16.210715  998061 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:16.210881  998061 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:16.211296  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:16.211348  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:16.226484  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42363
	I0729 13:35:16.226962  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:16.227456  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:16.227479  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:16.227928  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:16.228135  998061 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:35:16.228340  998061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:16.228384  998061 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:35:16.231466  998061 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:16.232009  998061 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:16.232028  998061 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:16.232224  998061 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:35:16.232385  998061 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:35:16.232562  998061 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:35:16.232685  998061 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:35:16.315776  998061 ssh_runner.go:195] Run: systemctl --version
	I0729 13:35:16.321942  998061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:16.336912  998061 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:16.336939  998061 api_server.go:166] Checking apiserver status ...
	I0729 13:35:16.336971  998061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:16.352303  998061 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0729 13:35:16.362739  998061 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:16.362801  998061 ssh_runner.go:195] Run: ls
	I0729 13:35:16.367584  998061 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:16.373752  998061 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:16.373775  998061 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:35:16.373785  998061 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:16.373801  998061 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:35:16.374125  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:16.374172  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:16.390119  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I0729 13:35:16.390613  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:16.391087  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:16.391119  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:16.391552  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:16.391750  998061 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:35:16.393609  998061 status.go:330] ha-104111-m02 host status = "Running" (err=<nil>)
	I0729 13:35:16.393628  998061 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:35:16.393915  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:16.393948  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:16.408706  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36823
	I0729 13:35:16.409163  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:16.409636  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:16.409656  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:16.409945  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:16.410129  998061 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:35:16.412727  998061 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:16.413156  998061 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:35:16.413185  998061 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:16.413347  998061 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:35:16.413736  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:16.413782  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:16.428790  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0729 13:35:16.429192  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:16.429667  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:16.429685  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:16.429996  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:16.430183  998061 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:35:16.430354  998061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:16.430374  998061 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:35:16.432950  998061 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:16.433389  998061 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:35:16.433418  998061 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:16.433586  998061 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:35:16.433767  998061 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:35:16.433930  998061 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:35:16.434058  998061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	W0729 13:35:16.808616  998061 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:16.808666  998061 retry.go:31] will retry after 177.355995ms: dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:35:20.036683  998061 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:35:20.036775  998061 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0729 13:35:20.036808  998061 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:20.036817  998061 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 13:35:20.036845  998061 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:20.036852  998061 status.go:255] checking status of ha-104111-m03 ...
	I0729 13:35:20.037142  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:20.037184  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:20.052151  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0729 13:35:20.052679  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:20.053210  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:20.053232  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:20.053549  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:20.053766  998061 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:35:20.055149  998061 status.go:330] ha-104111-m03 host status = "Running" (err=<nil>)
	I0729 13:35:20.055164  998061 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:20.055555  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:20.055606  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:20.069843  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0729 13:35:20.070208  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:20.070686  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:20.070707  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:20.070990  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:20.071187  998061 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:35:20.073781  998061 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:20.074183  998061 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:20.074201  998061 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:20.074396  998061 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:20.074723  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:20.074760  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:20.089202  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0729 13:35:20.089621  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:20.090044  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:20.090075  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:20.090378  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:20.090566  998061 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:35:20.090781  998061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:20.090806  998061 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:35:20.093511  998061 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:20.093929  998061 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:20.093965  998061 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:20.094082  998061 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:35:20.094237  998061 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:35:20.094401  998061 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:35:20.094537  998061 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:35:20.175797  998061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:20.190847  998061 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:20.190876  998061 api_server.go:166] Checking apiserver status ...
	I0729 13:35:20.190911  998061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:20.204345  998061 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0729 13:35:20.215703  998061 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:20.215762  998061 ssh_runner.go:195] Run: ls
	I0729 13:35:20.219827  998061 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:20.224139  998061 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:20.224162  998061 status.go:422] ha-104111-m03 apiserver status = Running (err=<nil>)
	I0729 13:35:20.224170  998061 status.go:257] ha-104111-m03 status: &{Name:ha-104111-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:20.224185  998061 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:35:20.224607  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:20.224666  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:20.240454  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0729 13:35:20.240887  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:20.241420  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:20.241444  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:20.241754  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:20.241969  998061 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:35:20.243277  998061 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:35:20.243294  998061 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:20.243649  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:20.243713  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:20.259278  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37267
	I0729 13:35:20.259732  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:20.260271  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:20.260299  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:20.260639  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:20.260833  998061 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:35:20.263401  998061 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:20.263822  998061 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:20.263852  998061 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:20.263982  998061 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:20.264291  998061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:20.264329  998061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:20.279017  998061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0729 13:35:20.279431  998061 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:20.279882  998061 main.go:141] libmachine: Using API Version  1
	I0729 13:35:20.279906  998061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:20.280231  998061 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:20.280429  998061 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:35:20.280632  998061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:20.280652  998061 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:35:20.283405  998061 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:20.283814  998061 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:20.283845  998061 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:20.283953  998061 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:35:20.284133  998061 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:35:20.284273  998061 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:35:20.284434  998061 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:35:20.363730  998061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:20.378254  998061 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 3 (3.713248538s)

-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-104111-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 13:35:24.888321  998161 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:35:24.888482  998161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:24.888492  998161 out.go:304] Setting ErrFile to fd 2...
	I0729 13:35:24.888499  998161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:24.888669  998161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:35:24.888856  998161 out.go:298] Setting JSON to false
	I0729 13:35:24.888888  998161 mustload.go:65] Loading cluster: ha-104111
	I0729 13:35:24.888991  998161 notify.go:220] Checking for updates...
	I0729 13:35:24.889284  998161 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:35:24.889302  998161 status.go:255] checking status of ha-104111 ...
	I0729 13:35:24.889683  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:24.889749  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:24.905472  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0729 13:35:24.905890  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:24.906584  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:24.906618  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:24.906969  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:24.907189  998161 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:35:24.908750  998161 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:35:24.908766  998161 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:24.909091  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:24.909137  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:24.923711  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0729 13:35:24.924133  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:24.924655  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:24.924680  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:24.925003  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:24.925160  998161 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:35:24.927963  998161 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:24.928322  998161 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:24.928342  998161 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:24.928519  998161 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:24.928885  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:24.928939  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:24.944517  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0729 13:35:24.944900  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:24.945314  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:24.945333  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:24.945616  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:24.945794  998161 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:35:24.945967  998161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:24.945992  998161 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:35:24.948444  998161 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:24.948850  998161 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:24.948878  998161 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:24.948992  998161 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:35:24.949192  998161 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:35:24.949351  998161 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:35:24.949464  998161 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:35:25.031984  998161 ssh_runner.go:195] Run: systemctl --version
	I0729 13:35:25.037845  998161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:25.053337  998161 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:25.053391  998161 api_server.go:166] Checking apiserver status ...
	I0729 13:35:25.053437  998161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:25.068274  998161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0729 13:35:25.078471  998161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:25.078539  998161 ssh_runner.go:195] Run: ls
	I0729 13:35:25.083337  998161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:25.087838  998161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:25.087859  998161 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:35:25.087870  998161 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:25.087886  998161 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:35:25.088198  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:25.088238  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:25.103633  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36173
	I0729 13:35:25.104102  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:25.104597  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:25.104625  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:25.104984  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:25.105176  998161 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:35:25.106792  998161 status.go:330] ha-104111-m02 host status = "Running" (err=<nil>)
	I0729 13:35:25.106812  998161 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:35:25.107100  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:25.107134  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:25.121927  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34817
	I0729 13:35:25.122282  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:25.122732  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:25.122758  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:25.123080  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:25.123246  998161 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:35:25.125697  998161 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:25.126009  998161 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:35:25.126034  998161 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:25.126219  998161 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:35:25.126573  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:25.126609  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:25.141517  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42369
	I0729 13:35:25.141954  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:25.142486  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:25.142514  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:25.142812  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:25.143011  998161 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:35:25.143205  998161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:25.143234  998161 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:35:25.145765  998161 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:25.146219  998161 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:35:25.146246  998161 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:35:25.146372  998161 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:35:25.146547  998161 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:35:25.146695  998161 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:35:25.146896  998161 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	W0729 13:35:28.200650  998161 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.140:22: connect: no route to host
	W0729 13:35:28.200749  998161 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0729 13:35:28.200767  998161 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:28.200774  998161 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 13:35:28.200793  998161 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	I0729 13:35:28.200803  998161 status.go:255] checking status of ha-104111-m03 ...
	I0729 13:35:28.201109  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:28.201153  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:28.217568  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34051
	I0729 13:35:28.218049  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:28.218680  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:28.218711  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:28.219084  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:28.219308  998161 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:35:28.221123  998161 status.go:330] ha-104111-m03 host status = "Running" (err=<nil>)
	I0729 13:35:28.221145  998161 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:28.221557  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:28.221604  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:28.236670  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0729 13:35:28.237113  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:28.237627  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:28.237653  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:28.238063  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:28.238257  998161 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:35:28.240810  998161 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:28.241230  998161 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:28.241262  998161 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:28.241436  998161 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:28.241805  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:28.241854  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:28.256256  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0729 13:35:28.256656  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:28.257113  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:28.257133  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:28.257480  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:28.257661  998161 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:35:28.257878  998161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:28.257901  998161 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:35:28.260285  998161 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:28.260897  998161 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:28.260931  998161 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:28.261073  998161 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:35:28.261221  998161 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:35:28.261359  998161 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:35:28.261481  998161 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:35:28.343729  998161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:28.362253  998161 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:28.362289  998161 api_server.go:166] Checking apiserver status ...
	I0729 13:35:28.362324  998161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:28.377129  998161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0729 13:35:28.388169  998161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:28.388223  998161 ssh_runner.go:195] Run: ls
	I0729 13:35:28.392889  998161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:28.398747  998161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:28.398776  998161 status.go:422] ha-104111-m03 apiserver status = Running (err=<nil>)
	I0729 13:35:28.398786  998161 status.go:257] ha-104111-m03 status: &{Name:ha-104111-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:28.398805  998161 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:35:28.399131  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:28.399172  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:28.414404  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0729 13:35:28.414877  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:28.415404  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:28.415428  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:28.415777  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:28.415982  998161 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:35:28.417714  998161 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:35:28.417732  998161 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:28.418038  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:28.418077  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:28.432764  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0729 13:35:28.433119  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:28.433576  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:28.433598  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:28.433919  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:28.434088  998161 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:35:28.437042  998161 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:28.437568  998161 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:28.437600  998161 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:28.437759  998161 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:28.438051  998161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:28.438088  998161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:28.452643  998161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0729 13:35:28.453018  998161 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:28.453474  998161 main.go:141] libmachine: Using API Version  1
	I0729 13:35:28.453499  998161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:28.453857  998161 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:28.454054  998161 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:35:28.454269  998161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:28.454321  998161 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:35:28.457003  998161 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:28.457410  998161 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:28.457438  998161 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:28.457584  998161 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:35:28.457732  998161 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:35:28.457864  998161 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:35:28.457995  998161 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:35:28.540039  998161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:28.554160  998161 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 7 (616.935547ms)

-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-104111-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 13:35:39.474025  998310 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:35:39.474316  998310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:39.474327  998310 out.go:304] Setting ErrFile to fd 2...
	I0729 13:35:39.474331  998310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:39.474645  998310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:35:39.474905  998310 out.go:298] Setting JSON to false
	I0729 13:35:39.474947  998310 mustload.go:65] Loading cluster: ha-104111
	I0729 13:35:39.475085  998310 notify.go:220] Checking for updates...
	I0729 13:35:39.475490  998310 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:35:39.475511  998310 status.go:255] checking status of ha-104111 ...
	I0729 13:35:39.476101  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.476185  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.492437  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0729 13:35:39.492889  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.493405  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.493433  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.493801  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.494042  998310 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:35:39.495476  998310 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:35:39.495495  998310 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:39.495779  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.495818  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.511333  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I0729 13:35:39.511754  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.512239  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.512260  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.512657  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.512841  998310 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:35:39.515277  998310 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:39.515769  998310 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:39.515805  998310 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:39.515970  998310 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:35:39.516257  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.516292  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.531685  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0729 13:35:39.532056  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.532526  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.532547  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.532880  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.533065  998310 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:35:39.533266  998310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:39.533291  998310 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:35:39.536112  998310 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:39.536551  998310 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:35:39.536581  998310 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:35:39.536725  998310 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:35:39.536901  998310 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:35:39.537070  998310 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:35:39.537212  998310 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:35:39.620956  998310 ssh_runner.go:195] Run: systemctl --version
	I0729 13:35:39.627437  998310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:39.643093  998310 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:39.643130  998310 api_server.go:166] Checking apiserver status ...
	I0729 13:35:39.643176  998310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:39.660520  998310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0729 13:35:39.670553  998310 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:39.670631  998310 ssh_runner.go:195] Run: ls
	I0729 13:35:39.675258  998310 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:39.679432  998310 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:39.679458  998310 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:35:39.679469  998310 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:39.679487  998310 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:35:39.679787  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.679822  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.695119  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0729 13:35:39.695667  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.696178  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.696199  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.696566  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.696751  998310 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:35:39.698390  998310 status.go:330] ha-104111-m02 host status = "Stopped" (err=<nil>)
	I0729 13:35:39.698413  998310 status.go:343] host is not running, skipping remaining checks
	I0729 13:35:39.698420  998310 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:39.698436  998310 status.go:255] checking status of ha-104111-m03 ...
	I0729 13:35:39.698717  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.698766  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.714451  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40843
	I0729 13:35:39.714836  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.715292  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.715317  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.715645  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.715859  998310 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:35:39.717447  998310 status.go:330] ha-104111-m03 host status = "Running" (err=<nil>)
	I0729 13:35:39.717470  998310 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:39.717899  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.717937  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.732689  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0729 13:35:39.733098  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.733578  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.733602  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.733937  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.734112  998310 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:35:39.737101  998310 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:39.737583  998310 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:39.737623  998310 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:39.737756  998310 host.go:66] Checking if "ha-104111-m03" exists ...
	I0729 13:35:39.738077  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.738125  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.754600  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0729 13:35:39.755062  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.755676  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.755698  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.755980  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.756168  998310 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:35:39.756346  998310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:39.756364  998310 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:35:39.758869  998310 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:39.759262  998310 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:39.759292  998310 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:39.759361  998310 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:35:39.759503  998310 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:35:39.759644  998310 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:35:39.759776  998310 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:35:39.845317  998310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:39.861117  998310 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:35:39.861152  998310 api_server.go:166] Checking apiserver status ...
	I0729 13:35:39.861188  998310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:35:39.875063  998310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0729 13:35:39.883964  998310 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:35:39.884021  998310 ssh_runner.go:195] Run: ls
	I0729 13:35:39.889536  998310 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:35:39.893791  998310 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:35:39.893815  998310 status.go:422] ha-104111-m03 apiserver status = Running (err=<nil>)
	I0729 13:35:39.893824  998310 status.go:257] ha-104111-m03 status: &{Name:ha-104111-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:35:39.893841  998310 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:35:39.894131  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.894173  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.909542  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0729 13:35:39.909960  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.910440  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.910466  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.910810  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.911023  998310 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:35:39.912655  998310 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:35:39.912675  998310 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:39.913002  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.913063  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.927777  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0729 13:35:39.928162  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.928691  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.928713  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.929046  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.929233  998310 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:35:39.932001  998310 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:39.932481  998310 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:39.932518  998310 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:39.932615  998310 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:35:39.932929  998310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:39.932966  998310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:39.947434  998310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0729 13:35:39.947784  998310 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:39.948303  998310 main.go:141] libmachine: Using API Version  1
	I0729 13:35:39.948329  998310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:39.948646  998310 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:39.948837  998310 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:35:39.949026  998310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:35:39.949048  998310 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:35:39.951575  998310 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:39.951971  998310 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:39.951995  998310 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:39.952116  998310 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:35:39.952293  998310 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:35:39.952469  998310 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:35:39.952621  998310 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:35:40.031860  998310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:35:40.045477  998310 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-104111 -n ha-104111
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-104111 logs -n 25: (1.362160384s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111:/home/docker/cp-test_ha-104111-m03_ha-104111.txt                       |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111 sudo cat                                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111.txt                                 |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m02:/home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m02 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04:/home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m04 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp testdata/cp-test.txt                                                | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111:/home/docker/cp-test_ha-104111-m04_ha-104111.txt                       |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111 sudo cat                                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111.txt                                 |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m02:/home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m02 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03:/home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m03 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-104111 node stop m02 -v=7                                                     | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-104111 node start m02 -v=7                                                    | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:27:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:27:50.788594  992950 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:27:50.788711  992950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:50.788720  992950 out.go:304] Setting ErrFile to fd 2...
	I0729 13:27:50.788724  992950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:50.788892  992950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:27:50.789445  992950 out.go:298] Setting JSON to false
	I0729 13:27:50.790362  992950 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11423,"bootTime":1722248248,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:27:50.790420  992950 start.go:139] virtualization: kvm guest
	I0729 13:27:50.792605  992950 out.go:177] * [ha-104111] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:27:50.793994  992950 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:27:50.793992  992950 notify.go:220] Checking for updates...
	I0729 13:27:50.796557  992950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:27:50.798040  992950 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:27:50.799414  992950 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:27:50.800730  992950 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:27:50.802076  992950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:27:50.803553  992950 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:27:50.838089  992950 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:27:50.839245  992950 start.go:297] selected driver: kvm2
	I0729 13:27:50.839259  992950 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:27:50.839275  992950 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:27:50.840234  992950 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:27:50.840342  992950 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:27:50.854536  992950 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:27:50.854586  992950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:27:50.854795  992950 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:27:50.854863  992950 cni.go:84] Creating CNI manager for ""
	I0729 13:27:50.854876  992950 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 13:27:50.854887  992950 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 13:27:50.854944  992950 start.go:340] cluster config:
	{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:27:50.855039  992950 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:27:50.857541  992950 out.go:177] * Starting "ha-104111" primary control-plane node in "ha-104111" cluster
	I0729 13:27:50.858759  992950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:27:50.858788  992950 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:27:50.858798  992950 cache.go:56] Caching tarball of preloaded images
	I0729 13:27:50.858894  992950 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:27:50.858909  992950 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:27:50.859226  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:27:50.859248  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json: {Name:mk83cae594c7e4085d286e1d9eb5152c87251bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:27:50.859395  992950 start.go:360] acquireMachinesLock for ha-104111: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:27:50.859428  992950 start.go:364] duration metric: took 17.801µs to acquireMachinesLock for "ha-104111"
	I0729 13:27:50.859457  992950 start.go:93] Provisioning new machine with config: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:27:50.859519  992950 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:27:50.861740  992950 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:27:50.861869  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:27:50.861907  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:27:50.875832  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0729 13:27:50.876276  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:27:50.876873  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:27:50.876895  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:27:50.877289  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:27:50.877499  992950 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:27:50.877719  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:27:50.877902  992950 start.go:159] libmachine.API.Create for "ha-104111" (driver="kvm2")
	I0729 13:27:50.877929  992950 client.go:168] LocalClient.Create starting
	I0729 13:27:50.877972  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 13:27:50.878009  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:27:50.878031  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:27:50.878092  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 13:27:50.878110  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:27:50.878122  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:27:50.878137  992950 main.go:141] libmachine: Running pre-create checks...
	I0729 13:27:50.878150  992950 main.go:141] libmachine: (ha-104111) Calling .PreCreateCheck
	I0729 13:27:50.878538  992950 main.go:141] libmachine: (ha-104111) Calling .GetConfigRaw
	I0729 13:27:50.878889  992950 main.go:141] libmachine: Creating machine...
	I0729 13:27:50.878901  992950 main.go:141] libmachine: (ha-104111) Calling .Create
	I0729 13:27:50.879009  992950 main.go:141] libmachine: (ha-104111) Creating KVM machine...
	I0729 13:27:50.880156  992950 main.go:141] libmachine: (ha-104111) DBG | found existing default KVM network
	I0729 13:27:50.880906  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:50.880790  992973 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0729 13:27:50.880984  992950 main.go:141] libmachine: (ha-104111) DBG | created network xml: 
	I0729 13:27:50.881005  992950 main.go:141] libmachine: (ha-104111) DBG | <network>
	I0729 13:27:50.881017  992950 main.go:141] libmachine: (ha-104111) DBG |   <name>mk-ha-104111</name>
	I0729 13:27:50.881031  992950 main.go:141] libmachine: (ha-104111) DBG |   <dns enable='no'/>
	I0729 13:27:50.881042  992950 main.go:141] libmachine: (ha-104111) DBG |   
	I0729 13:27:50.881053  992950 main.go:141] libmachine: (ha-104111) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 13:27:50.881065  992950 main.go:141] libmachine: (ha-104111) DBG |     <dhcp>
	I0729 13:27:50.881078  992950 main.go:141] libmachine: (ha-104111) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 13:27:50.881090  992950 main.go:141] libmachine: (ha-104111) DBG |     </dhcp>
	I0729 13:27:50.881100  992950 main.go:141] libmachine: (ha-104111) DBG |   </ip>
	I0729 13:27:50.881121  992950 main.go:141] libmachine: (ha-104111) DBG |   
	I0729 13:27:50.881139  992950 main.go:141] libmachine: (ha-104111) DBG | </network>
	I0729 13:27:50.881154  992950 main.go:141] libmachine: (ha-104111) DBG | 
	I0729 13:27:50.885676  992950 main.go:141] libmachine: (ha-104111) DBG | trying to create private KVM network mk-ha-104111 192.168.39.0/24...
	I0729 13:27:50.951493  992950 main.go:141] libmachine: (ha-104111) DBG | private KVM network mk-ha-104111 192.168.39.0/24 created
	I0729 13:27:50.951527  992950 main.go:141] libmachine: (ha-104111) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111 ...
	I0729 13:27:50.951542  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:50.951462  992973 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:27:50.951555  992950 main.go:141] libmachine: (ha-104111) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:27:50.951576  992950 main.go:141] libmachine: (ha-104111) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:27:51.230142  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:51.230009  992973 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa...
	I0729 13:27:51.778580  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:51.778412  992973 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/ha-104111.rawdisk...
	I0729 13:27:51.778629  992950 main.go:141] libmachine: (ha-104111) DBG | Writing magic tar header
	I0729 13:27:51.778645  992950 main.go:141] libmachine: (ha-104111) DBG | Writing SSH key tar header
	I0729 13:27:51.778661  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:51.778578  992973 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111 ...
	I0729 13:27:51.778743  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111
	I0729 13:27:51.778771  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 13:27:51.778783  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111 (perms=drwx------)
	I0729 13:27:51.778795  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:27:51.778808  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 13:27:51.778818  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:27:51.778834  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 13:27:51.778875  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:27:51.778890  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 13:27:51.778907  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:27:51.778919  992950 main.go:141] libmachine: (ha-104111) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:27:51.778930  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:27:51.778944  992950 main.go:141] libmachine: (ha-104111) DBG | Checking permissions on dir: /home
	I0729 13:27:51.778955  992950 main.go:141] libmachine: (ha-104111) DBG | Skipping /home - not owner
	I0729 13:27:51.778969  992950 main.go:141] libmachine: (ha-104111) Creating domain...
	I0729 13:27:51.780145  992950 main.go:141] libmachine: (ha-104111) define libvirt domain using xml: 
	I0729 13:27:51.780169  992950 main.go:141] libmachine: (ha-104111) <domain type='kvm'>
	I0729 13:27:51.780176  992950 main.go:141] libmachine: (ha-104111)   <name>ha-104111</name>
	I0729 13:27:51.780184  992950 main.go:141] libmachine: (ha-104111)   <memory unit='MiB'>2200</memory>
	I0729 13:27:51.780212  992950 main.go:141] libmachine: (ha-104111)   <vcpu>2</vcpu>
	I0729 13:27:51.780234  992950 main.go:141] libmachine: (ha-104111)   <features>
	I0729 13:27:51.780241  992950 main.go:141] libmachine: (ha-104111)     <acpi/>
	I0729 13:27:51.780248  992950 main.go:141] libmachine: (ha-104111)     <apic/>
	I0729 13:27:51.780275  992950 main.go:141] libmachine: (ha-104111)     <pae/>
	I0729 13:27:51.780296  992950 main.go:141] libmachine: (ha-104111)     
	I0729 13:27:51.780308  992950 main.go:141] libmachine: (ha-104111)   </features>
	I0729 13:27:51.780319  992950 main.go:141] libmachine: (ha-104111)   <cpu mode='host-passthrough'>
	I0729 13:27:51.780327  992950 main.go:141] libmachine: (ha-104111)   
	I0729 13:27:51.780337  992950 main.go:141] libmachine: (ha-104111)   </cpu>
	I0729 13:27:51.780347  992950 main.go:141] libmachine: (ha-104111)   <os>
	I0729 13:27:51.780358  992950 main.go:141] libmachine: (ha-104111)     <type>hvm</type>
	I0729 13:27:51.780369  992950 main.go:141] libmachine: (ha-104111)     <boot dev='cdrom'/>
	I0729 13:27:51.780381  992950 main.go:141] libmachine: (ha-104111)     <boot dev='hd'/>
	I0729 13:27:51.780388  992950 main.go:141] libmachine: (ha-104111)     <bootmenu enable='no'/>
	I0729 13:27:51.780398  992950 main.go:141] libmachine: (ha-104111)   </os>
	I0729 13:27:51.780423  992950 main.go:141] libmachine: (ha-104111)   <devices>
	I0729 13:27:51.780436  992950 main.go:141] libmachine: (ha-104111)     <disk type='file' device='cdrom'>
	I0729 13:27:51.780453  992950 main.go:141] libmachine: (ha-104111)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/boot2docker.iso'/>
	I0729 13:27:51.780475  992950 main.go:141] libmachine: (ha-104111)       <target dev='hdc' bus='scsi'/>
	I0729 13:27:51.780487  992950 main.go:141] libmachine: (ha-104111)       <readonly/>
	I0729 13:27:51.780503  992950 main.go:141] libmachine: (ha-104111)     </disk>
	I0729 13:27:51.780538  992950 main.go:141] libmachine: (ha-104111)     <disk type='file' device='disk'>
	I0729 13:27:51.780565  992950 main.go:141] libmachine: (ha-104111)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:27:51.780584  992950 main.go:141] libmachine: (ha-104111)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/ha-104111.rawdisk'/>
	I0729 13:27:51.780594  992950 main.go:141] libmachine: (ha-104111)       <target dev='hda' bus='virtio'/>
	I0729 13:27:51.780606  992950 main.go:141] libmachine: (ha-104111)     </disk>
	I0729 13:27:51.780617  992950 main.go:141] libmachine: (ha-104111)     <interface type='network'>
	I0729 13:27:51.780626  992950 main.go:141] libmachine: (ha-104111)       <source network='mk-ha-104111'/>
	I0729 13:27:51.780636  992950 main.go:141] libmachine: (ha-104111)       <model type='virtio'/>
	I0729 13:27:51.780647  992950 main.go:141] libmachine: (ha-104111)     </interface>
	I0729 13:27:51.780656  992950 main.go:141] libmachine: (ha-104111)     <interface type='network'>
	I0729 13:27:51.780668  992950 main.go:141] libmachine: (ha-104111)       <source network='default'/>
	I0729 13:27:51.780677  992950 main.go:141] libmachine: (ha-104111)       <model type='virtio'/>
	I0729 13:27:51.780689  992950 main.go:141] libmachine: (ha-104111)     </interface>
	I0729 13:27:51.780699  992950 main.go:141] libmachine: (ha-104111)     <serial type='pty'>
	I0729 13:27:51.780710  992950 main.go:141] libmachine: (ha-104111)       <target port='0'/>
	I0729 13:27:51.780720  992950 main.go:141] libmachine: (ha-104111)     </serial>
	I0729 13:27:51.780743  992950 main.go:141] libmachine: (ha-104111)     <console type='pty'>
	I0729 13:27:51.780764  992950 main.go:141] libmachine: (ha-104111)       <target type='serial' port='0'/>
	I0729 13:27:51.780779  992950 main.go:141] libmachine: (ha-104111)     </console>
	I0729 13:27:51.780788  992950 main.go:141] libmachine: (ha-104111)     <rng model='virtio'>
	I0729 13:27:51.780799  992950 main.go:141] libmachine: (ha-104111)       <backend model='random'>/dev/random</backend>
	I0729 13:27:51.780809  992950 main.go:141] libmachine: (ha-104111)     </rng>
	I0729 13:27:51.780820  992950 main.go:141] libmachine: (ha-104111)     
	I0729 13:27:51.780839  992950 main.go:141] libmachine: (ha-104111)     
	I0729 13:27:51.780850  992950 main.go:141] libmachine: (ha-104111)   </devices>
	I0729 13:27:51.780859  992950 main.go:141] libmachine: (ha-104111) </domain>
	I0729 13:27:51.780869  992950 main.go:141] libmachine: (ha-104111) 
	I0729 13:27:51.784970  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:56:90:6c in network default
	I0729 13:27:51.785491  992950 main.go:141] libmachine: (ha-104111) Ensuring networks are active...
	I0729 13:27:51.785516  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:51.786082  992950 main.go:141] libmachine: (ha-104111) Ensuring network default is active
	I0729 13:27:51.786356  992950 main.go:141] libmachine: (ha-104111) Ensuring network mk-ha-104111 is active
	I0729 13:27:51.786802  992950 main.go:141] libmachine: (ha-104111) Getting domain xml...
	I0729 13:27:51.787733  992950 main.go:141] libmachine: (ha-104111) Creating domain...
	I0729 13:27:52.105613  992950 main.go:141] libmachine: (ha-104111) Waiting to get IP...
	I0729 13:27:52.106450  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:52.106796  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:52.106823  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:52.106778  992973 retry.go:31] will retry after 241.772351ms: waiting for machine to come up
	I0729 13:27:52.350209  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:52.350683  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:52.350711  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:52.350631  992973 retry.go:31] will retry after 337.465105ms: waiting for machine to come up
	I0729 13:27:52.690197  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:52.690572  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:52.690605  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:52.690510  992973 retry.go:31] will retry after 387.904142ms: waiting for machine to come up
	I0729 13:27:53.080125  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:53.080538  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:53.080567  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:53.080465  992973 retry.go:31] will retry after 487.916897ms: waiting for machine to come up
	I0729 13:27:53.570315  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:53.570738  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:53.570767  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:53.570686  992973 retry.go:31] will retry after 466.286646ms: waiting for machine to come up
	I0729 13:27:54.038226  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:54.038676  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:54.038721  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:54.038635  992973 retry.go:31] will retry after 815.865488ms: waiting for machine to come up
	I0729 13:27:54.856028  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:54.856378  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:54.856438  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:54.856337  992973 retry.go:31] will retry after 972.389168ms: waiting for machine to come up
	I0729 13:27:55.830484  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:55.830991  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:55.831018  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:55.830938  992973 retry.go:31] will retry after 1.143318078s: waiting for machine to come up
	I0729 13:27:56.975732  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:56.976170  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:56.976194  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:56.976118  992973 retry.go:31] will retry after 1.842354399s: waiting for machine to come up
	I0729 13:27:58.821254  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:27:58.821629  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:27:58.821659  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:27:58.821581  992973 retry.go:31] will retry after 1.46639238s: waiting for machine to come up
	I0729 13:28:00.290154  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:00.290479  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:28:00.290511  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:28:00.290422  992973 retry.go:31] will retry after 2.370742002s: waiting for machine to come up
	I0729 13:28:02.663791  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:02.664211  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:28:02.664241  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:28:02.664148  992973 retry.go:31] will retry after 2.99875569s: waiting for machine to come up
	I0729 13:28:05.666325  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:05.666722  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:28:05.666748  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:28:05.666682  992973 retry.go:31] will retry after 3.701072815s: waiting for machine to come up
	I0729 13:28:09.371868  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:09.372285  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find current IP address of domain ha-104111 in network mk-ha-104111
	I0729 13:28:09.372311  992950 main.go:141] libmachine: (ha-104111) DBG | I0729 13:28:09.372241  992973 retry.go:31] will retry after 5.605611611s: waiting for machine to come up
	I0729 13:28:14.983056  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:14.983474  992950 main.go:141] libmachine: (ha-104111) Found IP for machine: 192.168.39.120
	I0729 13:28:14.983490  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has current primary IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:14.983496  992950 main.go:141] libmachine: (ha-104111) Reserving static IP address...
	I0729 13:28:14.983848  992950 main.go:141] libmachine: (ha-104111) DBG | unable to find host DHCP lease matching {name: "ha-104111", mac: "52:54:00:44:4b:6b", ip: "192.168.39.120"} in network mk-ha-104111
	I0729 13:28:15.055152  992950 main.go:141] libmachine: (ha-104111) Reserved static IP address: 192.168.39.120
	I0729 13:28:15.055179  992950 main.go:141] libmachine: (ha-104111) Waiting for SSH to be available...
	I0729 13:28:15.055188  992950 main.go:141] libmachine: (ha-104111) DBG | Getting to WaitForSSH function...
	I0729 13:28:15.058104  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.058535  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.058566  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.058694  992950 main.go:141] libmachine: (ha-104111) DBG | Using SSH client type: external
	I0729 13:28:15.058720  992950 main.go:141] libmachine: (ha-104111) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa (-rw-------)
	I0729 13:28:15.058739  992950 main.go:141] libmachine: (ha-104111) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:28:15.058761  992950 main.go:141] libmachine: (ha-104111) DBG | About to run SSH command:
	I0729 13:28:15.058783  992950 main.go:141] libmachine: (ha-104111) DBG | exit 0
	I0729 13:28:15.184333  992950 main.go:141] libmachine: (ha-104111) DBG | SSH cmd err, output: <nil>: 
	I0729 13:28:15.184642  992950 main.go:141] libmachine: (ha-104111) KVM machine creation complete!
	I0729 13:28:15.184936  992950 main.go:141] libmachine: (ha-104111) Calling .GetConfigRaw
	I0729 13:28:15.185506  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:15.185699  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:15.185826  992950 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:28:15.185841  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:28:15.187000  992950 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:28:15.187017  992950 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:28:15.187025  992950 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:28:15.187032  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.189218  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.189563  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.189587  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.189708  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.189900  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.190068  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.190193  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.190337  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.190582  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.190595  992950 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:28:15.299583  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:28:15.299618  992950 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:28:15.299630  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.302494  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.302852  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.302883  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.302992  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.303196  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.303396  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.303552  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.303714  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.303892  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.303904  992950 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:28:15.412926  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:28:15.413014  992950 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:28:15.413026  992950 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:28:15.413033  992950 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:28:15.413297  992950 buildroot.go:166] provisioning hostname "ha-104111"
	I0729 13:28:15.413328  992950 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:28:15.413518  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.416085  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.416327  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.416350  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.416526  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.416700  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.416856  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.416992  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.417113  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.417303  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.417317  992950 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-104111 && echo "ha-104111" | sudo tee /etc/hostname
	I0729 13:28:15.538759  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111
	
	I0729 13:28:15.538794  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.541286  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.541625  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.541651  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.541797  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.541965  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.542104  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.542283  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.542420  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.542627  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.542648  992950 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-104111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-104111/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-104111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:28:15.661668  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:28:15.661699  992950 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:28:15.661719  992950 buildroot.go:174] setting up certificates
	I0729 13:28:15.661730  992950 provision.go:84] configureAuth start
	I0729 13:28:15.661739  992950 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:28:15.662041  992950 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:28:15.664715  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.665028  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.665070  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.665202  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.667336  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.667669  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.667698  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.667817  992950 provision.go:143] copyHostCerts
	I0729 13:28:15.667848  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:28:15.667888  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 13:28:15.667898  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:28:15.667967  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:28:15.668070  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:28:15.668090  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 13:28:15.668097  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:28:15.668121  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:28:15.668177  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:28:15.668193  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 13:28:15.668201  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:28:15.668223  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:28:15.668289  992950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.ha-104111 san=[127.0.0.1 192.168.39.120 ha-104111 localhost minikube]
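The "generating server cert" step above issues a server certificate whose SANs cover the VM IP, hostname, localhost and minikube. A minimal sketch of producing a SAN-bearing certificate with the standard crypto/x509 package follows; it is self-signed for brevity, whereas the real provisioner signs with the CA from ca.pem/ca-key.pem, and the subject organization is only an example.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-104111"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: 127.0.0.1 192.168.39.120 ha-104111 localhost minikube
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.120")},
		DNSNames:    []string{"ha-104111", "localhost", "minikube"},
	}
	// Self-signed here; minikube signs the server cert with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}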
	I0729 13:28:15.745030  992950 provision.go:177] copyRemoteCerts
	I0729 13:28:15.745104  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:28:15.745130  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.747826  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.748132  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.748155  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.748305  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.748534  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.748704  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.748814  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:15.834722  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:28:15.834800  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:28:15.858258  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:28:15.858319  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 13:28:15.880816  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:28:15.880885  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:28:15.903313  992950 provision.go:87] duration metric: took 241.568168ms to configureAuth
	I0729 13:28:15.903338  992950 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:28:15.903546  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:28:15.903651  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:15.906022  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.906348  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:15.906377  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:15.906480  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:15.906698  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.906854  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:15.906988  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:15.907116  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:15.907270  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:15.907284  992950 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:28:16.174285  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
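The step just logged writes a one-line sysconfig file for the container runtime and restarts CRI-O in a single remote command. A toy reconstruction of how such a command string could be assembled in Go is shown below; only the option value comes from the log, the rest is an assumption for illustration and not the minikube source.

package main

import "fmt"

func main() {
	// Registry option as recorded above; the surrounding command is illustrative.
	sysconfig := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "` + sysconfig +
		`" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	fmt.Println(cmd)
}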
	I0729 13:28:16.174337  992950 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:28:16.174347  992950 main.go:141] libmachine: (ha-104111) Calling .GetURL
	I0729 13:28:16.175719  992950 main.go:141] libmachine: (ha-104111) DBG | Using libvirt version 6000000
	I0729 13:28:16.177617  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.177975  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.177996  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.178211  992950 main.go:141] libmachine: Docker is up and running!
	I0729 13:28:16.178227  992950 main.go:141] libmachine: Reticulating splines...
	I0729 13:28:16.178234  992950 client.go:171] duration metric: took 25.300294822s to LocalClient.Create
	I0729 13:28:16.178257  992950 start.go:167] duration metric: took 25.300358917s to libmachine.API.Create "ha-104111"
	I0729 13:28:16.178267  992950 start.go:293] postStartSetup for "ha-104111" (driver="kvm2")
	I0729 13:28:16.178277  992950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:28:16.178312  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.178559  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:28:16.178603  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:16.180432  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.180790  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.180815  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.180971  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:16.181146  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.181307  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:16.181441  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:16.266860  992950 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:28:16.271002  992950 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:28:16.271024  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:28:16.271102  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:28:16.271194  992950 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 13:28:16.271206  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 13:28:16.271336  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:28:16.280902  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:28:16.303532  992950 start.go:296] duration metric: took 125.253889ms for postStartSetup
	I0729 13:28:16.303585  992950 main.go:141] libmachine: (ha-104111) Calling .GetConfigRaw
	I0729 13:28:16.304161  992950 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:28:16.306579  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.306900  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.306926  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.307255  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:28:16.307422  992950 start.go:128] duration metric: took 25.447892576s to createHost
	I0729 13:28:16.307517  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:16.309538  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.309806  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.309833  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.309947  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:16.310134  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.310254  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.310380  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:16.310505  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:28:16.310696  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:28:16.310711  992950 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:28:16.421166  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259696.378704676
	
	I0729 13:28:16.421193  992950 fix.go:216] guest clock: 1722259696.378704676
	I0729 13:28:16.421200  992950 fix.go:229] Guest: 2024-07-29 13:28:16.378704676 +0000 UTC Remote: 2024-07-29 13:28:16.307433053 +0000 UTC m=+25.553437361 (delta=71.271623ms)
	I0729 13:28:16.421219  992950 fix.go:200] guest clock delta is within tolerance: 71.271623ms
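The fix.go lines above compare the guest clock against the host clock and accept the drift when it stays under a tolerance. A small sketch of that comparison, reusing the exact timestamps from the log (the 2s tolerance is an assumed example value, not taken from the run):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock to skip a resync, mirroring the "guest clock delta is within tolerance" check.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Date(2024, 7, 29, 13, 28, 16, 378704676, time.UTC)
	host := time.Date(2024, 7, 29, 13, 28, 16, 307433053, time.UTC)
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within=%v\n", delta, ok) // delta=71.271623ms within=true
}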
	I0729 13:28:16.421225  992950 start.go:83] releasing machines lock for "ha-104111", held for 25.56178633s
	I0729 13:28:16.421244  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.421537  992950 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:28:16.424050  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.424391  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.424439  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.424587  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.425069  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.425247  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:16.425376  992950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:28:16.425427  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:16.425498  992950 ssh_runner.go:195] Run: cat /version.json
	I0729 13:28:16.425525  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:16.427791  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.428091  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.428116  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.428202  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.428218  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:16.428381  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.428565  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:16.428621  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:16.428641  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:16.428729  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:16.428831  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:16.428986  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:16.429148  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:16.429271  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:16.534145  992950 ssh_runner.go:195] Run: systemctl --version
	I0729 13:28:16.539884  992950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:28:16.694793  992950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:28:16.700796  992950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:28:16.700850  992950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:28:16.715914  992950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:28:16.715935  992950 start.go:495] detecting cgroup driver to use...
	I0729 13:28:16.715997  992950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:28:16.732889  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:28:16.746761  992950 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:28:16.746830  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:28:16.759847  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:28:16.772834  992950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:28:16.884479  992950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:28:17.022280  992950 docker.go:233] disabling docker service ...
	I0729 13:28:17.022353  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:28:17.036668  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:28:17.049204  992950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:28:17.177884  992950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:28:17.296976  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:28:17.310302  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:28:17.327927  992950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:28:17.327986  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.337890  992950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:28:17.337961  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.347740  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.357142  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.367108  992950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:28:17.377001  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.386599  992950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.403144  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:28:17.413026  992950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:28:17.421978  992950 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:28:17.422020  992950 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:28:17.434078  992950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:28:17.442867  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:28:17.567268  992950 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:28:17.707576  992950 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:28:17.707669  992950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:28:17.712696  992950 start.go:563] Will wait 60s for crictl version
	I0729 13:28:17.712753  992950 ssh_runner.go:195] Run: which crictl
	I0729 13:28:17.716312  992950 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:28:17.754087  992950 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:28:17.754174  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:28:17.783004  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:28:17.813418  992950 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:28:17.814636  992950 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:28:17.816916  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:17.817297  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:17.817315  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:17.817564  992950 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:28:17.821740  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:28:17.834267  992950 kubeadm.go:883] updating cluster {Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:28:17.834380  992950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:28:17.834420  992950 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:28:17.865819  992950 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 13:28:17.865905  992950 ssh_runner.go:195] Run: which lz4
	I0729 13:28:17.869709  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 13:28:17.869806  992950 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 13:28:17.873918  992950 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 13:28:17.873958  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 13:28:19.218990  992950 crio.go:462] duration metric: took 1.349212991s to copy over tarball
	I0729 13:28:19.219082  992950 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 13:28:21.351652  992950 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132536185s)
	I0729 13:28:21.351681  992950 crio.go:469] duration metric: took 2.132659596s to extract the tarball
	I0729 13:28:21.351689  992950 ssh_runner.go:146] rm: /preloaded.tar.lz4
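The preload step above copies a ~406 MB .tar.lz4 to the guest and unpacks it into /var with tar's lz4 decompressor while preserving xattrs. A sketch of driving that same extraction from Go with os/exec, under the assumption that tar and lz4 are available on the target and the tarball sits at /preloaded.tar.lz4 as in the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Extract the preloaded image tarball, keeping security.capability xattrs
	// so image file capabilities survive the copy, as the logged command does.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}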
	I0729 13:28:21.389185  992950 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:28:21.438048  992950 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:28:21.438073  992950 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:28:21.438083  992950 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.30.3 crio true true} ...
	I0729 13:28:21.438242  992950 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-104111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:28:21.438324  992950 ssh_runner.go:195] Run: crio config
	I0729 13:28:21.483647  992950 cni.go:84] Creating CNI manager for ""
	I0729 13:28:21.483671  992950 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 13:28:21.483680  992950 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:28:21.483703  992950 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-104111 NodeName:ha-104111 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:28:21.483857  992950 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-104111"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
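The kubeadm config dump above is rendered from the kubeadm options printed a few lines earlier (advertise address, API port, CRI socket, node name). A toy sketch of templating just the InitConfiguration section from those values with text/template; the template text and field names are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	// Values mirror the log for ha-104111.
	data := struct {
		AdvertiseAddress, CRISocket, NodeName, NodeIP string
		BindPort                                      int
	}{"192.168.39.120", "unix:///var/run/crio/crio.sock", "ha-104111", "192.168.39.120", 8443}
	tmpl := template.Must(template.New("init").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}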
	I0729 13:28:21.483889  992950 kube-vip.go:115] generating kube-vip config ...
	I0729 13:28:21.483932  992950 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 13:28:21.503075  992950 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 13:28:21.503238  992950 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
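The kube-vip static-pod manifest generated above is later written to /etc/kubernetes/manifests as plain YAML. One way to sanity-check such a manifest before shipping it is sketched below with the third-party gopkg.in/yaml.v3 package; the file path and the fields inspected are just examples, not part of the recorded run.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Placeholder path for the generated manifest.
	raw, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod struct {
		Kind     string `yaml:"kind"`
		Metadata struct {
			Name      string `yaml:"name"`
			Namespace string `yaml:"namespace"`
		} `yaml:"metadata"`
		Spec struct {
			HostNetwork bool `yaml:"hostNetwork"`
		} `yaml:"spec"`
	}
	if err := yaml.Unmarshal(raw, &pod); err != nil {
		panic(err)
	}
	fmt.Printf("kind=%s name=%s/%s hostNetwork=%v\n",
		pod.Kind, pod.Metadata.Namespace, pod.Metadata.Name, pod.Spec.HostNetwork)
}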
	I0729 13:28:21.503310  992950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:28:21.513555  992950 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:28:21.513634  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 13:28:21.523417  992950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 13:28:21.539906  992950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:28:21.556171  992950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 13:28:21.572306  992950 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 13:28:21.588601  992950 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 13:28:21.592303  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:28:21.604503  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:28:21.732996  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:28:21.751100  992950 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111 for IP: 192.168.39.120
	I0729 13:28:21.751125  992950 certs.go:194] generating shared ca certs ...
	I0729 13:28:21.751141  992950 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:21.751320  992950 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:28:21.751382  992950 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:28:21.751396  992950 certs.go:256] generating profile certs ...
	I0729 13:28:21.751456  992950 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key
	I0729 13:28:21.751472  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt with IP's: []
	I0729 13:28:22.105163  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt ...
	I0729 13:28:22.105196  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt: {Name:mkc3fe2e5d41e3efc36f038ba4c6055663b8dc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.105368  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key ...
	I0729 13:28:22.105378  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key: {Name:mk3b1e6250db3fab7db8560f50a7c8f8313bd412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.105452  992950 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.bdd936fb
	I0729 13:28:22.105467  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.bdd936fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.120 192.168.39.254]
	I0729 13:28:22.236602  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.bdd936fb ...
	I0729 13:28:22.236632  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.bdd936fb: {Name:mk2839a37647f2d64573698795d7cf40367c9e2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.236786  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.bdd936fb ...
	I0729 13:28:22.236800  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.bdd936fb: {Name:mk9ce3510a255664f7a806593cc42fe59a2e626d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.236872  992950 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.bdd936fb -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt
	I0729 13:28:22.236966  992950 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.bdd936fb -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key
	I0729 13:28:22.237027  992950 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key
	I0729 13:28:22.237047  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt with IP's: []
	I0729 13:28:22.293825  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt ...
	I0729 13:28:22.293857  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt: {Name:mk68f198bf63d825af64973559bb29938c0cec2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.294027  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key ...
	I0729 13:28:22.294038  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key: {Name:mk492dd4f00a939c2ebdc925d86fe11b5b3b16fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:22.294112  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:28:22.294129  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:28:22.294142  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:28:22.294154  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:28:22.294165  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:28:22.294177  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:28:22.294189  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:28:22.294200  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:28:22.294260  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 13:28:22.294293  992950 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 13:28:22.294303  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:28:22.294328  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:28:22.294350  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:28:22.294373  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:28:22.294407  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:28:22.294433  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 13:28:22.294446  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:22.294457  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 13:28:22.295038  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:28:22.321141  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:28:22.345483  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:28:22.368908  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:28:22.391452  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 13:28:22.414268  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:28:22.436942  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:28:22.459693  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:28:22.481930  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 13:28:22.504273  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:28:22.529719  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 13:28:22.552777  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:28:22.569738  992950 ssh_runner.go:195] Run: openssl version
	I0729 13:28:22.575826  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 13:28:22.586583  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 13:28:22.592046  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 13:28:22.592145  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 13:28:22.597882  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:28:22.608836  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:28:22.619552  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:22.624002  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:22.624055  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:28:22.629501  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:28:22.640082  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 13:28:22.650915  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 13:28:22.654989  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 13:28:22.655049  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 13:28:22.660522  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
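Each CA bundle installed above is hashed with `openssl x509 -hash` and symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can locate it (e.g. b5213941.0 for minikubeCA.pem in this run). A short sketch of obtaining that hash by shelling out to openssl, assuming openssl is on PATH and using a placeholder file path:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash shells out to openssl exactly as the logged commands do; the
// result names the /etc/ssl/certs/<hash>.0 symlink pointing back at the PEM.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}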
	I0729 13:28:22.671050  992950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:28:22.674773  992950 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:28:22.674837  992950 kubeadm.go:392] StartCluster: {Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:28:22.674926  992950 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:28:22.674982  992950 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:28:22.713478  992950 cri.go:89] found id: ""
	I0729 13:28:22.713543  992950 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 13:28:22.723732  992950 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 13:28:22.736813  992950 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 13:28:22.747594  992950 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 13:28:22.747627  992950 kubeadm.go:157] found existing configuration files:
	
	I0729 13:28:22.747680  992950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 13:28:22.757528  992950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 13:28:22.757590  992950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 13:28:22.768035  992950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 13:28:22.777779  992950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 13:28:22.777834  992950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 13:28:22.787666  992950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 13:28:22.797092  992950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 13:28:22.797144  992950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 13:28:22.806932  992950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 13:28:22.816137  992950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 13:28:22.816184  992950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 13:28:22.825638  992950 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 13:28:23.067098  992950 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 13:28:33.934512  992950 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 13:28:33.934579  992950 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 13:28:33.934734  992950 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 13:28:33.934887  992950 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 13:28:33.934981  992950 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 13:28:33.935067  992950 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 13:28:33.936487  992950 out.go:204]   - Generating certificates and keys ...
	I0729 13:28:33.936597  992950 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 13:28:33.936688  992950 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 13:28:33.936793  992950 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 13:28:33.936886  992950 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 13:28:33.936969  992950 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 13:28:33.937034  992950 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 13:28:33.937131  992950 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 13:28:33.937308  992950 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-104111 localhost] and IPs [192.168.39.120 127.0.0.1 ::1]
	I0729 13:28:33.937399  992950 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 13:28:33.937555  992950 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-104111 localhost] and IPs [192.168.39.120 127.0.0.1 ::1]
	I0729 13:28:33.937654  992950 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 13:28:33.937736  992950 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 13:28:33.937815  992950 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 13:28:33.937897  992950 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 13:28:33.937961  992950 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 13:28:33.938042  992950 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 13:28:33.938140  992950 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 13:28:33.938235  992950 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 13:28:33.938314  992950 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 13:28:33.938428  992950 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 13:28:33.938503  992950 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 13:28:33.940206  992950 out.go:204]   - Booting up control plane ...
	I0729 13:28:33.940284  992950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 13:28:33.940350  992950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 13:28:33.940420  992950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 13:28:33.940549  992950 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 13:28:33.940701  992950 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 13:28:33.940753  992950 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 13:28:33.940941  992950 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 13:28:33.941052  992950 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 13:28:33.941135  992950 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001569089s
	I0729 13:28:33.941215  992950 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 13:28:33.941269  992950 kubeadm.go:310] [api-check] The API server is healthy after 5.806345069s
	I0729 13:28:33.941372  992950 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 13:28:33.941474  992950 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 13:28:33.941536  992950 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 13:28:33.941692  992950 kubeadm.go:310] [mark-control-plane] Marking the node ha-104111 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 13:28:33.941743  992950 kubeadm.go:310] [bootstrap-token] Using token: xdwoky.ewd6hddagkcpyjfo
	I0729 13:28:33.942935  992950 out.go:204]   - Configuring RBAC rules ...
	I0729 13:28:33.943017  992950 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 13:28:33.943085  992950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 13:28:33.943232  992950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 13:28:33.943352  992950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 13:28:33.943451  992950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 13:28:33.943522  992950 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 13:28:33.943616  992950 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 13:28:33.943657  992950 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 13:28:33.943695  992950 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 13:28:33.943701  992950 kubeadm.go:310] 
	I0729 13:28:33.943749  992950 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 13:28:33.943755  992950 kubeadm.go:310] 
	I0729 13:28:33.943821  992950 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 13:28:33.943830  992950 kubeadm.go:310] 
	I0729 13:28:33.943883  992950 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 13:28:33.943964  992950 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 13:28:33.944031  992950 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 13:28:33.944041  992950 kubeadm.go:310] 
	I0729 13:28:33.944095  992950 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 13:28:33.944101  992950 kubeadm.go:310] 
	I0729 13:28:33.944139  992950 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 13:28:33.944147  992950 kubeadm.go:310] 
	I0729 13:28:33.944209  992950 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 13:28:33.944291  992950 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 13:28:33.944362  992950 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 13:28:33.944371  992950 kubeadm.go:310] 
	I0729 13:28:33.944457  992950 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 13:28:33.944527  992950 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 13:28:33.944533  992950 kubeadm.go:310] 
	I0729 13:28:33.944604  992950 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdwoky.ewd6hddagkcpyjfo \
	I0729 13:28:33.944708  992950 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 13:28:33.944733  992950 kubeadm.go:310] 	--control-plane 
	I0729 13:28:33.944737  992950 kubeadm.go:310] 
	I0729 13:28:33.944818  992950 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 13:28:33.944825  992950 kubeadm.go:310] 
	I0729 13:28:33.944894  992950 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdwoky.ewd6hddagkcpyjfo \
	I0729 13:28:33.944991  992950 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 13:28:33.945005  992950 cni.go:84] Creating CNI manager for ""
	I0729 13:28:33.945012  992950 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 13:28:33.946411  992950 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 13:28:33.947617  992950 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 13:28:33.954558  992950 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 13:28:33.954583  992950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 13:28:33.977635  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 13:28:34.350635  992950 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 13:28:34.350736  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:34.350773  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-104111 minikube.k8s.io/updated_at=2024_07_29T13_28_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=ha-104111 minikube.k8s.io/primary=true
	I0729 13:28:34.565795  992950 ops.go:34] apiserver oom_adj: -16
	I0729 13:28:34.566016  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:35.066116  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:35.566930  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:36.066139  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:36.566669  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:37.066772  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:37.566404  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:38.066899  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:38.566503  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:39.066796  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:39.566354  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:40.066322  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:40.566701  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:41.066952  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:41.566851  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:42.066976  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:42.566510  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:43.066525  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:43.566299  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:44.066428  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:44.566724  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:45.066936  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:45.566316  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:46.066630  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 13:28:46.202876  992950 kubeadm.go:1113] duration metric: took 11.852222314s to wait for elevateKubeSystemPrivileges
	I0729 13:28:46.202909  992950 kubeadm.go:394] duration metric: took 23.528078246s to StartCluster
	I0729 13:28:46.202936  992950 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:46.203057  992950 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:28:46.204061  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:28:46.204293  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 13:28:46.204333  992950 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:28:46.204365  992950 start.go:241] waiting for startup goroutines ...
	I0729 13:28:46.204378  992950 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 13:28:46.204472  992950 addons.go:69] Setting storage-provisioner=true in profile "ha-104111"
	I0729 13:28:46.204481  992950 addons.go:69] Setting default-storageclass=true in profile "ha-104111"
	I0729 13:28:46.204521  992950 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-104111"
	I0729 13:28:46.204600  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:28:46.204522  992950 addons.go:234] Setting addon storage-provisioner=true in "ha-104111"
	I0729 13:28:46.204656  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:28:46.204992  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.205027  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.205067  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.205100  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.220591  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0729 13:28:46.220885  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0729 13:28:46.221108  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.221364  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.221622  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.221641  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.221934  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.221961  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.222000  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.222192  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:28:46.222244  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.222702  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.222731  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.224507  992950 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:28:46.224850  992950 kapi.go:59] client config for ha-104111: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt", KeyFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key", CAFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 13:28:46.225375  992950 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 13:28:46.225638  992950 addons.go:234] Setting addon default-storageclass=true in "ha-104111"
	I0729 13:28:46.225684  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:28:46.226049  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.226078  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.237878  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39635
	I0729 13:28:46.238450  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.239005  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.239032  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.239471  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.239692  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:28:46.241072  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0729 13:28:46.241493  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:46.241561  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.241990  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.242009  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.242346  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.242823  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:46.242863  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:46.243202  992950 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 13:28:46.244666  992950 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:28:46.244690  992950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 13:28:46.244721  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:46.247848  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:46.248337  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:46.248358  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:46.248525  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:46.248674  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:46.248798  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:46.248903  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:46.259524  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
	I0729 13:28:46.259902  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:46.260306  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:46.260328  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:46.260691  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:46.260872  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:28:46.262289  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:28:46.262543  992950 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 13:28:46.262560  992950 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 13:28:46.262577  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:28:46.265345  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:46.265755  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:28:46.265780  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:28:46.265937  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:28:46.266121  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:28:46.266267  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:28:46.266406  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:28:46.392436  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
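	(The long one-liner above rewrites the coredns ConfigMap in place so that cluster DNS can resolve host.minikube.internal. Reconstructed from the two sed expressions only, and assuming the stock Corefile layout rather than reading it back from the cluster, the edited Corefile ends up looking roughly like this:

	    .:53 {
	        log
	        errors
	        # ... remaining stock plugins unchanged ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        # ...
	    }

	i.e. a hosts stanza is inserted immediately before the forward plugin, and log is enabled just above errors.)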
	I0729 13:28:46.409069  992950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 13:28:46.459119  992950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 13:28:46.874034  992950 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 13:28:47.023431  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.023456  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.023494  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.023517  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.023752  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.023768  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.023778  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.023806  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.023888  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.023900  992950 main.go:141] libmachine: (ha-104111) DBG | Closing plugin on server side
	I0729 13:28:47.023904  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.023922  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.023928  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.024068  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.024079  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.024210  992950 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 13:28:47.024216  992950 round_trippers.go:469] Request Headers:
	I0729 13:28:47.024226  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:28:47.024239  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:28:47.024336  992950 main.go:141] libmachine: (ha-104111) DBG | Closing plugin on server side
	I0729 13:28:47.024397  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.024467  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.034431  992950 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0729 13:28:47.035185  992950 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 13:28:47.035203  992950 round_trippers.go:469] Request Headers:
	I0729 13:28:47.035211  992950 round_trippers.go:473]     Content-Type: application/json
	I0729 13:28:47.035216  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:28:47.035228  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:28:47.037982  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:28:47.038136  992950 main.go:141] libmachine: Making call to close driver server
	I0729 13:28:47.038149  992950 main.go:141] libmachine: (ha-104111) Calling .Close
	I0729 13:28:47.038377  992950 main.go:141] libmachine: Successfully made call to close driver server
	I0729 13:28:47.038394  992950 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 13:28:47.039939  992950 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 13:28:47.040923  992950 addons.go:510] duration metric: took 836.544864ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 13:28:47.040958  992950 start.go:246] waiting for cluster config update ...
	I0729 13:28:47.040973  992950 start.go:255] writing updated cluster config ...
	I0729 13:28:47.042655  992950 out.go:177] 
	I0729 13:28:47.043885  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:28:47.043950  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:28:47.045507  992950 out.go:177] * Starting "ha-104111-m02" control-plane node in "ha-104111" cluster
	I0729 13:28:47.046600  992950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:28:47.046624  992950 cache.go:56] Caching tarball of preloaded images
	I0729 13:28:47.046710  992950 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:28:47.046721  992950 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:28:47.046824  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:28:47.047024  992950 start.go:360] acquireMachinesLock for ha-104111-m02: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:28:47.047070  992950 start.go:364] duration metric: took 27.861µs to acquireMachinesLock for "ha-104111-m02"
	I0729 13:28:47.047087  992950 start.go:93] Provisioning new machine with config: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:28:47.047151  992950 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 13:28:47.048656  992950 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:28:47.048750  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:28:47.048774  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:47.063244  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I0729 13:28:47.063627  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:47.064088  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:28:47.064106  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:47.064446  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:47.064680  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetMachineName
	I0729 13:28:47.064824  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:28:47.064990  992950 start.go:159] libmachine.API.Create for "ha-104111" (driver="kvm2")
	I0729 13:28:47.065015  992950 client.go:168] LocalClient.Create starting
	I0729 13:28:47.065059  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 13:28:47.065096  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:28:47.065117  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:28:47.065193  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 13:28:47.065218  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:28:47.065231  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:28:47.065259  992950 main.go:141] libmachine: Running pre-create checks...
	I0729 13:28:47.065270  992950 main.go:141] libmachine: (ha-104111-m02) Calling .PreCreateCheck
	I0729 13:28:47.065465  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetConfigRaw
	I0729 13:28:47.065860  992950 main.go:141] libmachine: Creating machine...
	I0729 13:28:47.065877  992950 main.go:141] libmachine: (ha-104111-m02) Calling .Create
	I0729 13:28:47.066036  992950 main.go:141] libmachine: (ha-104111-m02) Creating KVM machine...
	I0729 13:28:47.067122  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found existing default KVM network
	I0729 13:28:47.067317  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found existing private KVM network mk-ha-104111
	I0729 13:28:47.067471  992950 main.go:141] libmachine: (ha-104111-m02) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02 ...
	I0729 13:28:47.067497  992950 main.go:141] libmachine: (ha-104111-m02) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:28:47.067565  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.067447  993329 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:28:47.067661  992950 main.go:141] libmachine: (ha-104111-m02) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:28:47.329794  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.329677  993329 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa...
	I0729 13:28:47.429305  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.429180  993329 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/ha-104111-m02.rawdisk...
	I0729 13:28:47.429340  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Writing magic tar header
	I0729 13:28:47.429356  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Writing SSH key tar header
	I0729 13:28:47.429370  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.429299  993329 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02 ...
	I0729 13:28:47.429472  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02
	I0729 13:28:47.429524  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02 (perms=drwx------)
	I0729 13:28:47.429537  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 13:28:47.429555  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:28:47.429568  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 13:28:47.429582  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:28:47.429595  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:28:47.429609  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:28:47.429626  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 13:28:47.429634  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Checking permissions on dir: /home
	I0729 13:28:47.429645  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Skipping /home - not owner
	I0729 13:28:47.429670  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 13:28:47.429687  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:28:47.429698  992950 main.go:141] libmachine: (ha-104111-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:28:47.429708  992950 main.go:141] libmachine: (ha-104111-m02) Creating domain...
	I0729 13:28:47.430594  992950 main.go:141] libmachine: (ha-104111-m02) define libvirt domain using xml: 
	I0729 13:28:47.430619  992950 main.go:141] libmachine: (ha-104111-m02) <domain type='kvm'>
	I0729 13:28:47.430630  992950 main.go:141] libmachine: (ha-104111-m02)   <name>ha-104111-m02</name>
	I0729 13:28:47.430664  992950 main.go:141] libmachine: (ha-104111-m02)   <memory unit='MiB'>2200</memory>
	I0729 13:28:47.430688  992950 main.go:141] libmachine: (ha-104111-m02)   <vcpu>2</vcpu>
	I0729 13:28:47.430695  992950 main.go:141] libmachine: (ha-104111-m02)   <features>
	I0729 13:28:47.430706  992950 main.go:141] libmachine: (ha-104111-m02)     <acpi/>
	I0729 13:28:47.430715  992950 main.go:141] libmachine: (ha-104111-m02)     <apic/>
	I0729 13:28:47.430737  992950 main.go:141] libmachine: (ha-104111-m02)     <pae/>
	I0729 13:28:47.430758  992950 main.go:141] libmachine: (ha-104111-m02)     
	I0729 13:28:47.430792  992950 main.go:141] libmachine: (ha-104111-m02)   </features>
	I0729 13:28:47.430817  992950 main.go:141] libmachine: (ha-104111-m02)   <cpu mode='host-passthrough'>
	I0729 13:28:47.430830  992950 main.go:141] libmachine: (ha-104111-m02)   
	I0729 13:28:47.430847  992950 main.go:141] libmachine: (ha-104111-m02)   </cpu>
	I0729 13:28:47.430859  992950 main.go:141] libmachine: (ha-104111-m02)   <os>
	I0729 13:28:47.430867  992950 main.go:141] libmachine: (ha-104111-m02)     <type>hvm</type>
	I0729 13:28:47.430997  992950 main.go:141] libmachine: (ha-104111-m02)     <boot dev='cdrom'/>
	I0729 13:28:47.431039  992950 main.go:141] libmachine: (ha-104111-m02)     <boot dev='hd'/>
	I0729 13:28:47.431054  992950 main.go:141] libmachine: (ha-104111-m02)     <bootmenu enable='no'/>
	I0729 13:28:47.431061  992950 main.go:141] libmachine: (ha-104111-m02)   </os>
	I0729 13:28:47.431070  992950 main.go:141] libmachine: (ha-104111-m02)   <devices>
	I0729 13:28:47.431082  992950 main.go:141] libmachine: (ha-104111-m02)     <disk type='file' device='cdrom'>
	I0729 13:28:47.431097  992950 main.go:141] libmachine: (ha-104111-m02)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/boot2docker.iso'/>
	I0729 13:28:47.431115  992950 main.go:141] libmachine: (ha-104111-m02)       <target dev='hdc' bus='scsi'/>
	I0729 13:28:47.431135  992950 main.go:141] libmachine: (ha-104111-m02)       <readonly/>
	I0729 13:28:47.431145  992950 main.go:141] libmachine: (ha-104111-m02)     </disk>
	I0729 13:28:47.431155  992950 main.go:141] libmachine: (ha-104111-m02)     <disk type='file' device='disk'>
	I0729 13:28:47.431169  992950 main.go:141] libmachine: (ha-104111-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:28:47.431192  992950 main.go:141] libmachine: (ha-104111-m02)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/ha-104111-m02.rawdisk'/>
	I0729 13:28:47.431210  992950 main.go:141] libmachine: (ha-104111-m02)       <target dev='hda' bus='virtio'/>
	I0729 13:28:47.431219  992950 main.go:141] libmachine: (ha-104111-m02)     </disk>
	I0729 13:28:47.431228  992950 main.go:141] libmachine: (ha-104111-m02)     <interface type='network'>
	I0729 13:28:47.431241  992950 main.go:141] libmachine: (ha-104111-m02)       <source network='mk-ha-104111'/>
	I0729 13:28:47.431251  992950 main.go:141] libmachine: (ha-104111-m02)       <model type='virtio'/>
	I0729 13:28:47.431264  992950 main.go:141] libmachine: (ha-104111-m02)     </interface>
	I0729 13:28:47.431285  992950 main.go:141] libmachine: (ha-104111-m02)     <interface type='network'>
	I0729 13:28:47.431298  992950 main.go:141] libmachine: (ha-104111-m02)       <source network='default'/>
	I0729 13:28:47.431307  992950 main.go:141] libmachine: (ha-104111-m02)       <model type='virtio'/>
	I0729 13:28:47.431332  992950 main.go:141] libmachine: (ha-104111-m02)     </interface>
	I0729 13:28:47.431354  992950 main.go:141] libmachine: (ha-104111-m02)     <serial type='pty'>
	I0729 13:28:47.431387  992950 main.go:141] libmachine: (ha-104111-m02)       <target port='0'/>
	I0729 13:28:47.431407  992950 main.go:141] libmachine: (ha-104111-m02)     </serial>
	I0729 13:28:47.431437  992950 main.go:141] libmachine: (ha-104111-m02)     <console type='pty'>
	I0729 13:28:47.431459  992950 main.go:141] libmachine: (ha-104111-m02)       <target type='serial' port='0'/>
	I0729 13:28:47.431469  992950 main.go:141] libmachine: (ha-104111-m02)     </console>
	I0729 13:28:47.431479  992950 main.go:141] libmachine: (ha-104111-m02)     <rng model='virtio'>
	I0729 13:28:47.431490  992950 main.go:141] libmachine: (ha-104111-m02)       <backend model='random'>/dev/random</backend>
	I0729 13:28:47.431498  992950 main.go:141] libmachine: (ha-104111-m02)     </rng>
	I0729 13:28:47.431509  992950 main.go:141] libmachine: (ha-104111-m02)     
	I0729 13:28:47.431527  992950 main.go:141] libmachine: (ha-104111-m02)     
	I0729 13:28:47.431537  992950 main.go:141] libmachine: (ha-104111-m02)   </devices>
	I0729 13:28:47.431546  992950 main.go:141] libmachine: (ha-104111-m02) </domain>
	I0729 13:28:47.431554  992950 main.go:141] libmachine: (ha-104111-m02) 
	I0729 13:28:47.437835  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:64:b7:88 in network default
	I0729 13:28:47.438408  992950 main.go:141] libmachine: (ha-104111-m02) Ensuring networks are active...
	I0729 13:28:47.438428  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:47.439098  992950 main.go:141] libmachine: (ha-104111-m02) Ensuring network default is active
	I0729 13:28:47.439426  992950 main.go:141] libmachine: (ha-104111-m02) Ensuring network mk-ha-104111 is active
	I0729 13:28:47.439808  992950 main.go:141] libmachine: (ha-104111-m02) Getting domain xml...
	I0729 13:28:47.440645  992950 main.go:141] libmachine: (ha-104111-m02) Creating domain...
	I0729 13:28:47.750199  992950 main.go:141] libmachine: (ha-104111-m02) Waiting to get IP...
	I0729 13:28:47.751151  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:47.751536  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:47.751564  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.751502  993329 retry.go:31] will retry after 191.367372ms: waiting for machine to come up
	I0729 13:28:47.945071  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:47.945535  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:47.945568  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:47.945487  993329 retry.go:31] will retry after 272.868972ms: waiting for machine to come up
	I0729 13:28:48.220189  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:48.220776  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:48.220809  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:48.220718  993329 retry.go:31] will retry after 480.381516ms: waiting for machine to come up
	I0729 13:28:48.702452  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:48.702934  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:48.702963  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:48.702890  993329 retry.go:31] will retry after 576.409222ms: waiting for machine to come up
	I0729 13:28:49.281103  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:49.281583  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:49.281613  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:49.281510  993329 retry.go:31] will retry after 759.907393ms: waiting for machine to come up
	I0729 13:28:50.043627  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:50.044116  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:50.044147  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:50.044078  993329 retry.go:31] will retry after 919.552774ms: waiting for machine to come up
	I0729 13:28:50.965536  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:50.966009  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:50.966054  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:50.965947  993329 retry.go:31] will retry after 856.019302ms: waiting for machine to come up
	I0729 13:28:51.824292  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:51.824800  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:51.824833  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:51.824742  993329 retry.go:31] will retry after 1.346244961s: waiting for machine to come up
	I0729 13:28:53.172719  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:53.173148  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:53.173179  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:53.173086  993329 retry.go:31] will retry after 1.765358776s: waiting for machine to come up
	I0729 13:28:54.941248  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:54.941718  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:54.941744  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:54.941682  993329 retry.go:31] will retry after 1.601671877s: waiting for machine to come up
	I0729 13:28:56.545651  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:56.546123  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:56.546181  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:56.546061  993329 retry.go:31] will retry after 2.533098194s: waiting for machine to come up
	I0729 13:28:59.082270  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:28:59.082757  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:28:59.082790  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:28:59.082716  993329 retry.go:31] will retry after 2.913309526s: waiting for machine to come up
	I0729 13:29:01.999738  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:02.000103  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find current IP address of domain ha-104111-m02 in network mk-ha-104111
	I0729 13:29:02.000131  992950 main.go:141] libmachine: (ha-104111-m02) DBG | I0729 13:29:02.000056  993329 retry.go:31] will retry after 3.778820645s: waiting for machine to come up
	I0729 13:29:05.780608  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.780979  992950 main.go:141] libmachine: (ha-104111-m02) Found IP for machine: 192.168.39.140
	I0729 13:29:05.780999  992950 main.go:141] libmachine: (ha-104111-m02) Reserving static IP address...
	I0729 13:29:05.781010  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has current primary IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.781302  992950 main.go:141] libmachine: (ha-104111-m02) DBG | unable to find host DHCP lease matching {name: "ha-104111-m02", mac: "52:54:00:5b:c5:02", ip: "192.168.39.140"} in network mk-ha-104111
	I0729 13:29:05.854419  992950 main.go:141] libmachine: (ha-104111-m02) Reserved static IP address: 192.168.39.140
	I0729 13:29:05.854454  992950 main.go:141] libmachine: (ha-104111-m02) Waiting for SSH to be available...
	I0729 13:29:05.854464  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Getting to WaitForSSH function...
	I0729 13:29:05.857521  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.857946  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:05.857978  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.858107  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Using SSH client type: external
	I0729 13:29:05.858127  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa (-rw-------)
	I0729 13:29:05.858171  992950 main.go:141] libmachine: (ha-104111-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:29:05.858183  992950 main.go:141] libmachine: (ha-104111-m02) DBG | About to run SSH command:
	I0729 13:29:05.858219  992950 main.go:141] libmachine: (ha-104111-m02) DBG | exit 0
	I0729 13:29:05.984234  992950 main.go:141] libmachine: (ha-104111-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 13:29:05.984527  992950 main.go:141] libmachine: (ha-104111-m02) KVM machine creation complete!
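The WaitForSSH probe above can be reproduced by hand; the sketch below is assembled from the SSH options, key path, and node IP printed in the log (it is not minikube's own code, and the paths are specific to this CI run):

    # Probe the new node exactly as the log does: run "exit 0" over SSH and
    # treat a zero exit status as "SSH is available".
    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa \
      -p 22 docker@192.168.39.140 "exit 0" && echo "SSH is up"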
	I0729 13:29:05.984865  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetConfigRaw
	I0729 13:29:05.985414  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:05.985604  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:05.985738  992950 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:29:05.985755  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:29:05.986986  992950 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:29:05.987005  992950 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:29:05.987014  992950 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:29:05.987023  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:05.990681  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.991066  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:05.991086  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:05.991249  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:05.991425  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:05.991583  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:05.991688  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:05.991837  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:05.992107  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:05.992118  992950 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:29:06.100529  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:29:06.100561  992950 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:29:06.100578  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.103008  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.103401  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.103433  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.103611  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.103805  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.103949  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.104075  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.104226  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:06.104503  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:06.104520  992950 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:29:06.213429  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:29:06.213514  992950 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:29:06.213529  992950 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:29:06.213541  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetMachineName
	I0729 13:29:06.213823  992950 buildroot.go:166] provisioning hostname "ha-104111-m02"
	I0729 13:29:06.213847  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetMachineName
	I0729 13:29:06.214043  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.216778  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.217150  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.217174  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.217350  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.217542  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.217760  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.217906  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.218043  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:06.218265  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:06.218286  992950 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-104111-m02 && echo "ha-104111-m02" | sudo tee /etc/hostname
	I0729 13:29:06.340599  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111-m02
	
	I0729 13:29:06.340626  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.343542  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.344070  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.344101  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.344290  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.344550  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.344742  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.344881  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.345077  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:06.345298  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:06.345321  992950 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-104111-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-104111-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-104111-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:29:06.461425  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:29:06.461462  992950 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:29:06.461486  992950 buildroot.go:174] setting up certificates
	I0729 13:29:06.461499  992950 provision.go:84] configureAuth start
	I0729 13:29:06.461512  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetMachineName
	I0729 13:29:06.461879  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:29:06.465014  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.465418  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.465450  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.465662  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.467921  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.468248  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.468284  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.468430  992950 provision.go:143] copyHostCerts
	I0729 13:29:06.468465  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:29:06.468501  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 13:29:06.468510  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:29:06.468575  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:29:06.468663  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:29:06.468681  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 13:29:06.468687  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:29:06.468710  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:29:06.468803  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:29:06.468825  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 13:29:06.468829  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:29:06.468853  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:29:06.468905  992950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.ha-104111-m02 san=[127.0.0.1 192.168.39.140 ha-104111-m02 localhost minikube]
	I0729 13:29:06.553276  992950 provision.go:177] copyRemoteCerts
	I0729 13:29:06.553338  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:29:06.553366  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.555888  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.556162  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.556193  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.556369  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.556573  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.556758  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.556905  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:29:06.642853  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:29:06.642954  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:29:06.667139  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:29:06.667222  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:29:06.691905  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:29:06.691968  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 13:29:06.716659  992950 provision.go:87] duration metric: took 255.146179ms to configureAuth
	I0729 13:29:06.716685  992950 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:29:06.716850  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:29:06.716926  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.719548  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.719920  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.719947  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.720091  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:06.720306  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.720517  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:06.720679  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:06.720883  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:06.721105  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:06.721123  992950 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:29:06.993658  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:29:06.993689  992950 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:29:06.993697  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetURL
	I0729 13:29:06.995058  992950 main.go:141] libmachine: (ha-104111-m02) DBG | Using libvirt version 6000000
	I0729 13:29:06.997040  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.997448  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:06.997476  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:06.997663  992950 main.go:141] libmachine: Docker is up and running!
	I0729 13:29:06.997677  992950 main.go:141] libmachine: Reticulating splines...
	I0729 13:29:06.997684  992950 client.go:171] duration metric: took 19.932661185s to LocalClient.Create
	I0729 13:29:06.997708  992950 start.go:167] duration metric: took 19.932717613s to libmachine.API.Create "ha-104111"
	I0729 13:29:06.997720  992950 start.go:293] postStartSetup for "ha-104111-m02" (driver="kvm2")
	I0729 13:29:06.997729  992950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:29:06.997755  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:06.998006  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:29:06.998031  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:06.999979  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.000356  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.000380  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.000539  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:07.000736  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.000892  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:07.001083  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:29:07.088492  992950 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:29:07.093011  992950 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:29:07.093043  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:29:07.093122  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:29:07.093220  992950 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 13:29:07.093234  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 13:29:07.093321  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:29:07.104263  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:29:07.130043  992950 start.go:296] duration metric: took 132.308837ms for postStartSetup
	I0729 13:29:07.130102  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetConfigRaw
	I0729 13:29:07.130836  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:29:07.133474  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.133858  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.133885  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.134118  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:29:07.134330  992950 start.go:128] duration metric: took 20.087167452s to createHost
	I0729 13:29:07.134356  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:07.136755  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.137085  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.137110  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.137261  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:07.137506  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.137677  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.137825  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:07.138015  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:29:07.138220  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 13:29:07.138232  992950 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:29:07.245283  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259747.216982635
	
	I0729 13:29:07.245313  992950 fix.go:216] guest clock: 1722259747.216982635
	I0729 13:29:07.245323  992950 fix.go:229] Guest: 2024-07-29 13:29:07.216982635 +0000 UTC Remote: 2024-07-29 13:29:07.134343214 +0000 UTC m=+76.380347522 (delta=82.639421ms)
	I0729 13:29:07.245346  992950 fix.go:200] guest clock delta is within tolerance: 82.639421ms
	I0729 13:29:07.245354  992950 start.go:83] releasing machines lock for "ha-104111-m02", held for 20.198273996s
	I0729 13:29:07.245378  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:07.245718  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:29:07.248734  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.249103  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.249128  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.251648  992950 out.go:177] * Found network options:
	I0729 13:29:07.253064  992950 out.go:177]   - NO_PROXY=192.168.39.120
	W0729 13:29:07.254398  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 13:29:07.254435  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:07.254959  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:07.255154  992950 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:29:07.255272  992950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:29:07.255317  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	W0729 13:29:07.255345  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 13:29:07.255418  992950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:29:07.255435  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:29:07.257934  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.258162  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.258300  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.258327  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.258459  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:07.258526  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:07.258550  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:07.258644  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.258731  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:29:07.258803  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:07.258863  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:29:07.258922  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:29:07.258956  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:29:07.259119  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:29:07.491891  992950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:29:07.498637  992950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:29:07.498729  992950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:29:07.515737  992950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 13:29:07.515786  992950 start.go:495] detecting cgroup driver to use...
	I0729 13:29:07.515853  992950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:29:07.536462  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:29:07.550741  992950 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:29:07.550824  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:29:07.565215  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:29:07.578745  992950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:29:07.689384  992950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:29:07.830050  992950 docker.go:233] disabling docker service ...
	I0729 13:29:07.830141  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:29:07.844716  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:29:07.857689  992950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:29:07.986082  992950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:29:08.114810  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:29:08.128510  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:29:08.147463  992950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:29:08.147531  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.159873  992950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:29:08.159945  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.170990  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.181899  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.192355  992950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:29:08.203362  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.214072  992950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.230996  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:29:08.241657  992950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:29:08.251180  992950 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:29:08.251241  992950 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:29:08.264803  992950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:29:08.274843  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:29:08.393478  992950 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:29:08.527762  992950 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:29:08.527851  992950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:29:08.532685  992950 start.go:563] Will wait 60s for crictl version
	I0729 13:29:08.532744  992950 ssh_runner.go:195] Run: which crictl
	I0729 13:29:08.536759  992950 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:29:08.574605  992950 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:29:08.574705  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:29:08.602122  992950 ssh_runner.go:195] Run: crio --version
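The container-runtime setup above rewrites /etc/crio/crio.conf.d/02-crio.conf through a series of sed edits and then restarts CRI-O. A condensed sketch of the same steps, assembled only from the commands shown in the log (run as root inside the guest; not minikube's own source):

    # Point crictl at the CRI-O socket, set the pause image and cgroup driver,
    # then reload units and restart the runtime.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    modprobe br_netfilter
    echo 1 > /proc/sys/net/ipv4/ip_forward
    systemctl daemon-reload && systemctl restart crio
    crictl version   # the log reports RuntimeName: cri-o, RuntimeVersion: 1.29.1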
	I0729 13:29:08.631758  992950 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:29:08.633116  992950 out.go:177]   - env NO_PROXY=192.168.39.120
	I0729 13:29:08.634529  992950 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:29:08.637259  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:08.637577  992950 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:29:00 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:29:08.637610  992950 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:29:08.637821  992950 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:29:08.642064  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:29:08.654465  992950 mustload.go:65] Loading cluster: ha-104111
	I0729 13:29:08.654680  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:29:08.654950  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:29:08.654991  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:29:08.669860  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42007
	I0729 13:29:08.670308  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:29:08.670780  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:29:08.670812  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:29:08.671178  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:29:08.671394  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:29:08.672965  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:29:08.673256  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:29:08.673290  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:29:08.687865  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0729 13:29:08.688269  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:29:08.688766  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:29:08.688790  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:29:08.689117  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:29:08.689306  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:29:08.689469  992950 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111 for IP: 192.168.39.140
	I0729 13:29:08.689478  992950 certs.go:194] generating shared ca certs ...
	I0729 13:29:08.689498  992950 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:29:08.689650  992950 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:29:08.689701  992950 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:29:08.689714  992950 certs.go:256] generating profile certs ...
	I0729 13:29:08.689814  992950 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key
	I0729 13:29:08.689847  992950 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.3ed82b7a
	I0729 13:29:08.689867  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.3ed82b7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.120 192.168.39.140 192.168.39.254]
	I0729 13:29:08.893797  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.3ed82b7a ...
	I0729 13:29:08.893826  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.3ed82b7a: {Name:mk739a7714392be88871b57878d3f430f8a41e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:29:08.894019  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.3ed82b7a ...
	I0729 13:29:08.894038  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.3ed82b7a: {Name:mkc92c21f4500c5c2f144d8589021c12a3ab62a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:29:08.894140  992950 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.3ed82b7a -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt
	I0729 13:29:08.894313  992950 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.3ed82b7a -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key
	I0729 13:29:08.894497  992950 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key
	I0729 13:29:08.894516  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:29:08.894534  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:29:08.894558  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:29:08.894578  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:29:08.894594  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:29:08.894608  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:29:08.894625  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:29:08.894641  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:29:08.894707  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 13:29:08.894753  992950 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 13:29:08.894766  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:29:08.894799  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:29:08.894830  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:29:08.894857  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:29:08.894914  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:29:08.894955  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 13:29:08.894971  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 13:29:08.894988  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:29:08.895030  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:29:08.897907  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:29:08.898331  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:29:08.898354  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:29:08.898587  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:29:08.898811  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:29:08.899002  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:29:08.899153  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:29:08.976740  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 13:29:08.982083  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 13:29:08.993686  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 13:29:08.998010  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 13:29:09.008443  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 13:29:09.012522  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 13:29:09.023131  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 13:29:09.027483  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 13:29:09.038049  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 13:29:09.042195  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 13:29:09.052369  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 13:29:09.057855  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 13:29:09.069736  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:29:09.098073  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:29:09.125847  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:29:09.152988  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:29:09.177037  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 13:29:09.200149  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:29:09.223523  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:29:09.247715  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:29:09.273947  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 13:29:09.297427  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 13:29:09.321118  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:29:09.344732  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 13:29:09.361476  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 13:29:09.378190  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 13:29:09.395388  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 13:29:09.411825  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 13:29:09.428546  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 13:29:09.445345  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 13:29:09.461913  992950 ssh_runner.go:195] Run: openssl version
	I0729 13:29:09.468134  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 13:29:09.479322  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 13:29:09.483952  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 13:29:09.484007  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 13:29:09.490251  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:29:09.501522  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:29:09.512296  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:29:09.516983  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:29:09.517048  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:29:09.522743  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:29:09.533299  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 13:29:09.544513  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 13:29:09.549432  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 13:29:09.549491  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 13:29:09.555587  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
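The certificate checks above make each CA trusted on the guest by hashing it with OpenSSL and linking it into /etc/ssl/certs under that hash. A minimal manual equivalent, using the minikubeCA.pem file named in the log (the b5213941 value is the hash the log itself reports):

    # Compute the subject hash and create the trust-store symlink, as the log
    # does for 982046.pem, 9820462.pem and minikubeCA.pem (run as root).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    ls -la "/etc/ssl/certs/${hash}.0"   # resolves to b5213941.0 in this run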
	I0729 13:29:09.566942  992950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:29:09.571255  992950 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:29:09.571309  992950 kubeadm.go:934] updating node {m02 192.168.39.140 8443 v1.30.3 crio true true} ...
	I0729 13:29:09.571404  992950 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-104111-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:29:09.571442  992950 kube-vip.go:115] generating kube-vip config ...
	I0729 13:29:09.571491  992950 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 13:29:09.590660  992950 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 13:29:09.590741  992950 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
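
Editor's note: this config is written below as the static pod manifest /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes in this run); the node's kubelet runs it directly, and kube-vip then competes for the plndr-cp-lock lease so exactly one control-plane node holds the VIP 192.168.39.254 and load-balances API traffic on port 8443. A minimal sketch that checks such a manifest parses as a core/v1 Pod, assuming the sigs.k8s.io/yaml and k8s.io/api modules are available; an illustration only, not part of minikube:

    package main

    import (
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(data, &pod); err != nil {
            panic(err)
        }
        fmt.Printf("static pod %s/%s, image %s, hostNetwork=%v\n",
            pod.Namespace, pod.Name, pod.Spec.Containers[0].Image, pod.Spec.HostNetwork)
    }
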
	I0729 13:29:09.590798  992950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:29:09.601992  992950 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 13:29:09.602060  992950 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 13:29:09.612289  992950 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 13:29:09.612333  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 13:29:09.612405  992950 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 13:29:09.612422  992950 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 13:29:09.612435  992950 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 13:29:09.617589  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 13:29:09.617614  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 13:29:10.236509  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 13:29:10.236608  992950 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 13:29:10.241897  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 13:29:10.241938  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 13:29:12.216934  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:29:12.231497  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 13:29:12.231600  992950 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 13:29:12.236175  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 13:29:12.236353  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
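
Editor's note: each binary is fetched through a checksum=file:...sha256 URL, i.e. the cached copy is verified against the published SHA-256 digest before being copied into /var/lib/minikube/binaries on the node. A minimal standard-library sketch of that verification step; the local file name and URL are illustrative:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // verifySHA256 hashes the local file and compares it against the published
    // .sha256 digest (the dl.k8s.io .sha256 files contain just the hex digest).
    func verifySHA256(path, shaURL string) error {
        resp, err := http.Get(shaURL)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        want, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }

        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != strings.TrimSpace(string(want)) {
            return fmt.Errorf("checksum mismatch for %s: got %s", path, got)
        }
        return nil
    }

    func main() {
        err := verifySHA256("kubectl", "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256")
        fmt.Println("verify:", err)
    }
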
	I0729 13:29:12.630248  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 13:29:12.640712  992950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 13:29:12.658075  992950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:29:12.675634  992950 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 13:29:12.692508  992950 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 13:29:12.696397  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
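
Editor's note: the bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the HA virtual IP, so kubeadm on the new node resolves the join endpoint to 192.168.39.254. A minimal Go sketch of the same rewrite, assuming it runs as root on the node (illustrative only):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Keep every line except an existing "<ip>\tcontrol-plane.minikube.internal" entry.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.39.254\t"+host)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }
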
	I0729 13:29:12.709569  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:29:12.830472  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:29:12.847286  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:29:12.847749  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:29:12.847800  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:29:12.863991  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0729 13:29:12.864480  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:29:12.864993  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:29:12.865019  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:29:12.865320  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:29:12.865554  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:29:12.865718  992950 start.go:317] joinCluster: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:29:12.865854  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 13:29:12.865883  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:29:12.868629  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:29:12.869103  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:29:12.869132  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:29:12.869217  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:29:12.869380  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:29:12.869521  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:29:12.869684  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:29:13.025199  992950 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:29:13.025267  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g4sw7y.9eyyh608n7bqq2vd --discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-104111-m02 --control-plane --apiserver-advertise-address=192.168.39.140 --apiserver-bind-port=8443"
	I0729 13:29:35.403263  992950 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g4sw7y.9eyyh608n7bqq2vd --discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-104111-m02 --control-plane --apiserver-advertise-address=192.168.39.140 --apiserver-bind-port=8443": (22.377961362s)
	I0729 13:29:35.403309  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 13:29:35.891226  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-104111-m02 minikube.k8s.io/updated_at=2024_07_29T13_29_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=ha-104111 minikube.k8s.io/primary=false
	I0729 13:29:36.032621  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-104111-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 13:29:36.132525  992950 start.go:319] duration metric: took 23.266803925s to joinCluster
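
Editor's note: once kubeadm join returns, the new control-plane node gets minikube's bookkeeping labels applied and its node-role.kubernetes.io/control-plane:NoSchedule taint removed, exactly as the two kubectl invocations above show. A minimal client-go sketch of the labeling half, assuming the kubeconfig path shown in the log; the label values here are illustrative:

    package main

    import (
        "context"
        "encoding/json"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Merge-patch the node's labels, the API-level equivalent of
        // `kubectl label --overwrite nodes ha-104111-m02 ...`.
        patch, _ := json.Marshal(map[string]any{
            "metadata": map[string]any{
                "labels": map[string]string{
                    "minikube.k8s.io/name":    "ha-104111",
                    "minikube.k8s.io/primary": "false",
                },
            },
        })
        if _, err := cs.CoreV1().Nodes().Patch(context.Background(), "ha-104111-m02",
            types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
    }
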
	I0729 13:29:36.132625  992950 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:29:36.132954  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:29:36.134359  992950 out.go:177] * Verifying Kubernetes components...
	I0729 13:29:36.135951  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:29:36.386814  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:29:36.440763  992950 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:29:36.441143  992950 kapi.go:59] client config for ha-104111: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt", KeyFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key", CAFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 13:29:36.441215  992950 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.120:8443
	I0729 13:29:36.441439  992950 node_ready.go:35] waiting up to 6m0s for node "ha-104111-m02" to be "Ready" ...
	I0729 13:29:36.441538  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:36.441548  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:36.441555  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:36.441558  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:36.453383  992950 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 13:29:36.942126  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:36.942148  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:36.942156  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:36.942160  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:36.946804  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:37.442473  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:37.442502  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:37.442518  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:37.442524  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:37.449273  992950 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 13:29:37.941680  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:37.941707  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:37.941718  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:37.941723  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:37.946410  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:38.441860  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:38.441886  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:38.441893  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:38.441896  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:38.445658  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:38.446546  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:38.941643  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:38.941667  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:38.941676  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:38.941680  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:38.945097  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:39.441830  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:39.441861  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:39.441873  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:39.441880  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:39.445086  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:39.942088  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:39.942113  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:39.942126  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:39.942131  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:39.945529  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:40.442556  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:40.442589  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:40.442601  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:40.442608  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:40.447373  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:40.448150  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:40.942483  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:40.942508  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:40.942527  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:40.942531  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:40.945952  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:41.441875  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:41.441896  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:41.441904  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:41.441908  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:41.447347  992950 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 13:29:41.942246  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:41.942269  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:41.942277  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:41.942282  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:41.945475  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:42.441733  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:42.441757  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:42.441766  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:42.441771  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:42.445901  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:42.942336  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:42.942361  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:42.942373  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:42.942380  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:42.947857  992950 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 13:29:42.948451  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:43.441690  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:43.441710  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:43.441719  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:43.441723  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:43.445013  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:43.942079  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:43.942105  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:43.942113  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:43.942117  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:43.945621  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:44.442542  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:44.442566  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:44.442575  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:44.442579  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:44.445568  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:44.941636  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:44.941659  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:44.941668  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:44.941672  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:44.944882  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:45.441782  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:45.441813  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:45.441830  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:45.441835  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:45.445508  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:45.446062  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:45.942294  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:45.942321  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:45.942328  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:45.942333  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:45.945563  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:46.441709  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:46.441732  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:46.441742  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:46.441748  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:46.444966  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:46.941674  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:46.941697  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:46.941705  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:46.941709  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:46.945643  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:47.441740  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:47.441766  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:47.441777  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:47.441788  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:47.444932  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:47.942210  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:47.942233  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:47.942242  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:47.942246  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:47.945147  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:47.945815  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:48.442060  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:48.442082  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:48.442091  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:48.442095  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:48.445112  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:48.941743  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:48.941770  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:48.941777  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:48.941781  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:48.945094  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:49.442394  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:49.442461  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:49.442484  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:49.442493  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:49.445335  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:49.942395  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:49.942421  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:49.942431  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:49.942436  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:49.945591  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:49.946166  992950 node_ready.go:53] node "ha-104111-m02" has status "Ready":"False"
	I0729 13:29:50.442660  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:50.442688  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:50.442699  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:50.442711  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:50.446215  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:50.942195  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:50.942218  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:50.942227  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:50.942231  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:50.945931  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:51.442429  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:51.442450  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.442459  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.442463  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.445117  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.941985  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:51.942010  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.942019  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.942023  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.945363  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:51.946002  992950 node_ready.go:49] node "ha-104111-m02" has status "Ready":"True"
	I0729 13:29:51.946028  992950 node_ready.go:38] duration metric: took 15.504572235s for node "ha-104111-m02" to be "Ready" ...
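
Editor's note: node_ready polls the Node object roughly every half second until its Ready condition turns True, which here took about 15.5s while the kubelet and CNI finished coming up on m02. A minimal client-go sketch of the same readiness wait, with illustrative kubeconfig path, node name and timeout:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Re-fetch the node until its Ready condition is True or the timeout expires.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := cs.CoreV1().Nodes().Get(ctx, "ha-104111-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
    }
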
	I0729 13:29:51.946040  992950 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:29:51.946115  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:51.946126  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.946136  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.946141  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.950415  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:29:51.957410  992950 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.957508  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9jrnl
	I0729 13:29:51.957517  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.957536  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.957546  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.960363  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.961016  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:51.961032  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.961042  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.961049  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.963544  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.964082  992950 pod_ready.go:92] pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:51.964103  992950 pod_ready.go:81] duration metric: took 6.665172ms for pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.964113  992950 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.964182  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gcf7q
	I0729 13:29:51.964192  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.964201  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.964210  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.967943  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:51.968678  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:51.968696  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.968706  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.968711  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.977108  992950 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 13:29:51.977654  992950 pod_ready.go:92] pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:51.977675  992950 pod_ready.go:81] duration metric: took 13.554914ms for pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.977684  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.977741  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111
	I0729 13:29:51.977748  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.977755  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.977761  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.980192  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.980826  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:51.980845  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.980854  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.980860  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.983293  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.983882  992950 pod_ready.go:92] pod "etcd-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:51.983897  992950 pod_ready.go:81] duration metric: took 6.205001ms for pod "etcd-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.983907  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.983954  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111-m02
	I0729 13:29:51.983960  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.983967  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.983973  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.986286  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.986880  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:51.986893  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:51.986900  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:51.986903  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:51.989049  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:29:51.989486  992950 pod_ready.go:92] pod "etcd-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:51.989502  992950 pod_ready.go:81] duration metric: took 5.587819ms for pod "etcd-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:51.989515  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.143040  992950 request.go:629] Waited for 153.461561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111
	I0729 13:29:52.143122  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111
	I0729 13:29:52.143127  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.143134  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.143140  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.146310  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:52.342451  992950 request.go:629] Waited for 195.396971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:52.342513  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:52.342519  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.342526  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.342530  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.346220  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:52.347027  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:52.347048  992950 pod_ready.go:81] duration metric: took 357.523914ms for pod "kube-apiserver-ha-104111" in "kube-system" namespace to be "Ready" ...
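
Editor's note: the "Waited ... due to client-side throttling" lines come from client-go's own rate limiter, not from server-side API Priority and Fairness. The rest.Config dumped earlier shows QPS:0 and Burst:0, so the client-go defaults of 5 requests/s with a burst of 10 apply, and these back-to-back GETs get delayed by a couple of hundred milliseconds each. A caller that needs to avoid those waits can raise the limits before building the clientset; a minimal sketch with illustrative values:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
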
	I0729 13:29:52.347058  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.542203  992950 request.go:629] Waited for 195.05716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m02
	I0729 13:29:52.542272  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m02
	I0729 13:29:52.542278  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.542286  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.542291  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.545662  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:52.742837  992950 request.go:629] Waited for 196.367811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:52.742919  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:52.742924  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.742932  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.742937  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.746222  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:52.746921  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:52.746942  992950 pod_ready.go:81] duration metric: took 399.878396ms for pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.746956  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:52.942458  992950 request.go:629] Waited for 195.422762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111
	I0729 13:29:52.942525  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111
	I0729 13:29:52.942529  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:52.942537  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:52.942542  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:52.945928  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.142097  992950 request.go:629] Waited for 195.306368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:53.142171  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:53.142176  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.142183  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.142189  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.145312  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.146344  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:53.146368  992950 pod_ready.go:81] duration metric: took 399.402588ms for pod "kube-controller-manager-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.146382  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.342533  992950 request.go:629] Waited for 196.040014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m02
	I0729 13:29:53.342605  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m02
	I0729 13:29:53.342611  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.342619  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.342623  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.346259  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.542404  992950 request.go:629] Waited for 195.381486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:53.542489  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:53.542502  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.542515  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.542522  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.545914  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.546477  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:53.546500  992950 pod_ready.go:81] duration metric: took 400.109056ms for pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.546514  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5dnvv" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.742509  992950 request.go:629] Waited for 195.89347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dnvv
	I0729 13:29:53.742598  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dnvv
	I0729 13:29:53.742607  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.742619  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.742628  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.745882  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.942973  992950 request.go:629] Waited for 196.370167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:53.943055  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:53.943060  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:53.943067  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:53.943071  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:53.946517  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:53.947342  992950 pod_ready.go:92] pod "kube-proxy-5dnvv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:53.947361  992950 pod_ready.go:81] duration metric: took 400.840279ms for pod "kube-proxy-5dnvv" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:53.947370  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n6kkf" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.142565  992950 request.go:629] Waited for 195.109125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf
	I0729 13:29:54.142651  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf
	I0729 13:29:54.142655  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.142664  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.142668  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.145792  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:54.342884  992950 request.go:629] Waited for 196.379896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:54.342946  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:54.342951  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.342958  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.342963  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.346249  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:54.346731  992950 pod_ready.go:92] pod "kube-proxy-n6kkf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:54.346749  992950 pod_ready.go:81] duration metric: took 399.373512ms for pod "kube-proxy-n6kkf" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.346757  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.542900  992950 request.go:629] Waited for 196.035828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111
	I0729 13:29:54.542972  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111
	I0729 13:29:54.542981  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.542992  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.543002  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.546363  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:54.742656  992950 request.go:629] Waited for 195.385036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:54.742740  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:29:54.742747  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.742759  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.742765  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.746133  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:54.746617  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:54.746640  992950 pod_ready.go:81] duration metric: took 399.87386ms for pod "kube-scheduler-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.746651  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:54.942799  992950 request.go:629] Waited for 196.059566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m02
	I0729 13:29:54.942866  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m02
	I0729 13:29:54.942871  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:54.942880  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:54.942884  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:54.945978  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:55.142826  992950 request.go:629] Waited for 196.3718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:55.142906  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:29:55.142911  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.142919  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.142933  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.146281  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:55.146738  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:29:55.146757  992950 pod_ready.go:81] duration metric: took 400.09936ms for pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:29:55.146767  992950 pod_ready.go:38] duration metric: took 3.200713503s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:29:55.146784  992950 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:29:55.146835  992950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:29:55.162380  992950 api_server.go:72] duration metric: took 19.029713385s to wait for apiserver process to appear ...
	I0729 13:29:55.162408  992950 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:29:55.162429  992950 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 13:29:55.167569  992950 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 13:29:55.167634  992950 round_trippers.go:463] GET https://192.168.39.120:8443/version
	I0729 13:29:55.167639  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.167646  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.167652  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.168457  992950 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 13:29:55.168565  992950 api_server.go:141] control plane version: v1.30.3
	I0729 13:29:55.168583  992950 api_server.go:131] duration metric: took 6.169505ms to wait for apiserver health ...
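
Editor's note: the health check goes straight to the node endpoint 192.168.39.120:8443 (the stale VIP host was overridden a few lines above) and expects a 200 response with body "ok" from /healthz before querying /version. A minimal sketch of such a probe using the profile's client certificate and CA paths shown earlier in this log; the code is only an illustration, not minikube's implementation:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        ca, err := os.ReadFile("/home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(ca)
        cert, err := tls.LoadX509KeyPair(
            "/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt",
            "/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key")
        if err != nil {
            panic(err)
        }
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
        }}
        resp, err := client.Get("https://192.168.39.120:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
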
	I0729 13:29:55.168591  992950 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:29:55.343027  992950 request.go:629] Waited for 174.355466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:55.343108  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:55.343114  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.343122  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.343126  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.348390  992950 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 13:29:55.352540  992950 system_pods.go:59] 17 kube-system pods found
	I0729 13:29:55.352581  992950 system_pods.go:61] "coredns-7db6d8ff4d-9jrnl" [0453ed97-efb4-41c1-8bfb-e7e004e618e0] Running
	I0729 13:29:55.352587  992950 system_pods.go:61] "coredns-7db6d8ff4d-gcf7q" [196981ba-ed16-427c-ae8b-9b7e8ff36be2] Running
	I0729 13:29:55.352592  992950 system_pods.go:61] "etcd-ha-104111" [309561db-8f30-4b42-8252-e02d9a26ec2e] Running
	I0729 13:29:55.352595  992950 system_pods.go:61] "etcd-ha-104111-m02" [4f09acca-1baa-4eba-8ef4-eb3e2b64512c] Running
	I0729 13:29:55.352598  992950 system_pods.go:61] "kindnet-9phpm" [60e9c45f-5176-492e-90c7-49b0201afe1e] Running
	I0729 13:29:55.352601  992950 system_pods.go:61] "kindnet-njndz" [a0477f9b-b1ff-49d8-8f39-21ffb84377e9] Running
	I0729 13:29:55.352605  992950 system_pods.go:61] "kube-apiserver-ha-104111" [d546ecd4-9bdb-4e41-9e4a-74d0c81359d5] Running
	I0729 13:29:55.352608  992950 system_pods.go:61] "kube-apiserver-ha-104111-m02" [70bd608c-3ebe-4306-8ec9-61c254ca5261] Running
	I0729 13:29:55.352611  992950 system_pods.go:61] "kube-controller-manager-ha-104111" [03be8232-ff90-43e1-87e0-5d61aeaa7c96] Running
	I0729 13:29:55.352615  992950 system_pods.go:61] "kube-controller-manager-ha-104111-m02" [d2ca4758-3c38-4655-8bfb-b5a64b0b6bca] Running
	I0729 13:29:55.352618  992950 system_pods.go:61] "kube-proxy-5dnvv" [2fb3553e-b114-4528-bf9a-1765356bb2a4] Running
	I0729 13:29:55.352623  992950 system_pods.go:61] "kube-proxy-n6kkf" [4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1] Running
	I0729 13:29:55.352627  992950 system_pods.go:61] "kube-scheduler-ha-104111" [3236068e-5891-4cb7-aa91-8aaf93260f3a] Running
	I0729 13:29:55.352630  992950 system_pods.go:61] "kube-scheduler-ha-104111-m02" [01a2d6c2-859d-44e8-9d53-0d257b4b4a1c] Running
	I0729 13:29:55.352633  992950 system_pods.go:61] "kube-vip-ha-104111" [edfeb506-2884-4406-92cf-c35fce56d7c4] Running
	I0729 13:29:55.352636  992950 system_pods.go:61] "kube-vip-ha-104111-m02" [bcc970d3-1717-4971-8216-7526fe2028ba] Running
	I0729 13:29:55.352639  992950 system_pods.go:61] "storage-provisioner" [b61cc52e-771b-484a-99d6-8963665cb1e8] Running
	I0729 13:29:55.352648  992950 system_pods.go:74] duration metric: took 184.048689ms to wait for pod list to return data ...
	I0729 13:29:55.352659  992950 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:29:55.542048  992950 request.go:629] Waited for 189.288949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/default/serviceaccounts
	I0729 13:29:55.542120  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/default/serviceaccounts
	I0729 13:29:55.542127  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.542137  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.542141  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.545898  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:55.546117  992950 default_sa.go:45] found service account: "default"
	I0729 13:29:55.546131  992950 default_sa.go:55] duration metric: took 193.466691ms for default service account to be created ...
	I0729 13:29:55.546140  992950 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:29:55.742596  992950 request.go:629] Waited for 196.370929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:55.742659  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:29:55.742664  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.742672  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.742676  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.747839  992950 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 13:29:55.753916  992950 system_pods.go:86] 17 kube-system pods found
	I0729 13:29:55.753944  992950 system_pods.go:89] "coredns-7db6d8ff4d-9jrnl" [0453ed97-efb4-41c1-8bfb-e7e004e618e0] Running
	I0729 13:29:55.753950  992950 system_pods.go:89] "coredns-7db6d8ff4d-gcf7q" [196981ba-ed16-427c-ae8b-9b7e8ff36be2] Running
	I0729 13:29:55.753954  992950 system_pods.go:89] "etcd-ha-104111" [309561db-8f30-4b42-8252-e02d9a26ec2e] Running
	I0729 13:29:55.753959  992950 system_pods.go:89] "etcd-ha-104111-m02" [4f09acca-1baa-4eba-8ef4-eb3e2b64512c] Running
	I0729 13:29:55.753962  992950 system_pods.go:89] "kindnet-9phpm" [60e9c45f-5176-492e-90c7-49b0201afe1e] Running
	I0729 13:29:55.753967  992950 system_pods.go:89] "kindnet-njndz" [a0477f9b-b1ff-49d8-8f39-21ffb84377e9] Running
	I0729 13:29:55.753970  992950 system_pods.go:89] "kube-apiserver-ha-104111" [d546ecd4-9bdb-4e41-9e4a-74d0c81359d5] Running
	I0729 13:29:55.753974  992950 system_pods.go:89] "kube-apiserver-ha-104111-m02" [70bd608c-3ebe-4306-8ec9-61c254ca5261] Running
	I0729 13:29:55.753979  992950 system_pods.go:89] "kube-controller-manager-ha-104111" [03be8232-ff90-43e1-87e0-5d61aeaa7c96] Running
	I0729 13:29:55.753983  992950 system_pods.go:89] "kube-controller-manager-ha-104111-m02" [d2ca4758-3c38-4655-8bfb-b5a64b0b6bca] Running
	I0729 13:29:55.753987  992950 system_pods.go:89] "kube-proxy-5dnvv" [2fb3553e-b114-4528-bf9a-1765356bb2a4] Running
	I0729 13:29:55.753990  992950 system_pods.go:89] "kube-proxy-n6kkf" [4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1] Running
	I0729 13:29:55.753994  992950 system_pods.go:89] "kube-scheduler-ha-104111" [3236068e-5891-4cb7-aa91-8aaf93260f3a] Running
	I0729 13:29:55.753998  992950 system_pods.go:89] "kube-scheduler-ha-104111-m02" [01a2d6c2-859d-44e8-9d53-0d257b4b4a1c] Running
	I0729 13:29:55.754003  992950 system_pods.go:89] "kube-vip-ha-104111" [edfeb506-2884-4406-92cf-c35fce56d7c4] Running
	I0729 13:29:55.754008  992950 system_pods.go:89] "kube-vip-ha-104111-m02" [bcc970d3-1717-4971-8216-7526fe2028ba] Running
	I0729 13:29:55.754013  992950 system_pods.go:89] "storage-provisioner" [b61cc52e-771b-484a-99d6-8963665cb1e8] Running
	I0729 13:29:55.754020  992950 system_pods.go:126] duration metric: took 207.873507ms to wait for k8s-apps to be running ...
	I0729 13:29:55.754031  992950 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:29:55.754077  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:29:55.771125  992950 system_svc.go:56] duration metric: took 17.080254ms WaitForService to wait for kubelet
	I0729 13:29:55.771170  992950 kubeadm.go:582] duration metric: took 19.638499805s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:29:55.771200  992950 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:29:55.942985  992950 request.go:629] Waited for 171.654425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes
	I0729 13:29:55.943057  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes
	I0729 13:29:55.943062  992950 round_trippers.go:469] Request Headers:
	I0729 13:29:55.943071  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:29:55.943078  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:29:55.946554  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:29:55.947499  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:29:55.947526  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:29:55.947539  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:29:55.947608  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:29:55.947622  992950 node_conditions.go:105] duration metric: took 176.415483ms to run NodePressure ...
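The NodePressure step above reads each node's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs per node) from the /api/v1/nodes response. As an illustrative sketch only, and assuming a reachable kubeconfig for this cluster (the path below is hypothetical), the same values can be read with client-go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; point it at the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}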
	I0729 13:29:55.947637  992950 start.go:241] waiting for startup goroutines ...
	I0729 13:29:55.947671  992950 start.go:255] writing updated cluster config ...
	I0729 13:29:55.949846  992950 out.go:177] 
	I0729 13:29:55.951228  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:29:55.951315  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:29:55.952728  992950 out.go:177] * Starting "ha-104111-m03" control-plane node in "ha-104111" cluster
	I0729 13:29:55.953806  992950 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:29:55.953826  992950 cache.go:56] Caching tarball of preloaded images
	I0729 13:29:55.953933  992950 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:29:55.953945  992950 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:29:55.954027  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:29:55.954183  992950 start.go:360] acquireMachinesLock for ha-104111-m03: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:29:55.954225  992950 start.go:364] duration metric: took 22.497µs to acquireMachinesLock for "ha-104111-m03"
	I0729 13:29:55.954243  992950 start.go:93] Provisioning new machine with config: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:29:55.954333  992950 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 13:29:55.955676  992950 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:29:55.955757  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:29:55.955801  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:29:55.971742  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34321
	I0729 13:29:55.972185  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:29:55.972719  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:29:55.972745  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:29:55.973162  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:29:55.973377  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetMachineName
	I0729 13:29:55.973607  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:29:55.973773  992950 start.go:159] libmachine.API.Create for "ha-104111" (driver="kvm2")
	I0729 13:29:55.973804  992950 client.go:168] LocalClient.Create starting
	I0729 13:29:55.973839  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 13:29:55.973879  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:29:55.973897  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:29:55.973971  992950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 13:29:55.973997  992950 main.go:141] libmachine: Decoding PEM data...
	I0729 13:29:55.974013  992950 main.go:141] libmachine: Parsing certificate...
	I0729 13:29:55.974038  992950 main.go:141] libmachine: Running pre-create checks...
	I0729 13:29:55.974050  992950 main.go:141] libmachine: (ha-104111-m03) Calling .PreCreateCheck
	I0729 13:29:55.974238  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetConfigRaw
	I0729 13:29:55.974660  992950 main.go:141] libmachine: Creating machine...
	I0729 13:29:55.974675  992950 main.go:141] libmachine: (ha-104111-m03) Calling .Create
	I0729 13:29:55.974838  992950 main.go:141] libmachine: (ha-104111-m03) Creating KVM machine...
	I0729 13:29:55.976239  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found existing default KVM network
	I0729 13:29:55.976310  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found existing private KVM network mk-ha-104111
	I0729 13:29:55.976508  992950 main.go:141] libmachine: (ha-104111-m03) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03 ...
	I0729 13:29:55.976540  992950 main.go:141] libmachine: (ha-104111-m03) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:29:55.976594  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:55.976481  993689 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:29:55.976699  992950 main.go:141] libmachine: (ha-104111-m03) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:29:56.253824  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:56.253690  993689 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa...
	I0729 13:29:56.448014  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:56.447897  993689 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/ha-104111-m03.rawdisk...
	I0729 13:29:56.448040  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Writing magic tar header
	I0729 13:29:56.448056  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Writing SSH key tar header
	I0729 13:29:56.448064  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:56.448041  993689 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03 ...
	I0729 13:29:56.448209  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03
	I0729 13:29:56.448234  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 13:29:56.448248  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03 (perms=drwx------)
	I0729 13:29:56.448269  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:29:56.448284  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 13:29:56.448300  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 13:29:56.448314  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:29:56.448328  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:29:56.448343  992950 main.go:141] libmachine: (ha-104111-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:29:56.448355  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 13:29:56.448370  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:29:56.448381  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:29:56.448394  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Checking permissions on dir: /home
	I0729 13:29:56.448405  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Skipping /home - not owner
	I0729 13:29:56.448432  992950 main.go:141] libmachine: (ha-104111-m03) Creating domain...
	I0729 13:29:56.449331  992950 main.go:141] libmachine: (ha-104111-m03) define libvirt domain using xml: 
	I0729 13:29:56.449353  992950 main.go:141] libmachine: (ha-104111-m03) <domain type='kvm'>
	I0729 13:29:56.449364  992950 main.go:141] libmachine: (ha-104111-m03)   <name>ha-104111-m03</name>
	I0729 13:29:56.449374  992950 main.go:141] libmachine: (ha-104111-m03)   <memory unit='MiB'>2200</memory>
	I0729 13:29:56.449380  992950 main.go:141] libmachine: (ha-104111-m03)   <vcpu>2</vcpu>
	I0729 13:29:56.449388  992950 main.go:141] libmachine: (ha-104111-m03)   <features>
	I0729 13:29:56.449415  992950 main.go:141] libmachine: (ha-104111-m03)     <acpi/>
	I0729 13:29:56.449436  992950 main.go:141] libmachine: (ha-104111-m03)     <apic/>
	I0729 13:29:56.449448  992950 main.go:141] libmachine: (ha-104111-m03)     <pae/>
	I0729 13:29:56.449456  992950 main.go:141] libmachine: (ha-104111-m03)     
	I0729 13:29:56.449461  992950 main.go:141] libmachine: (ha-104111-m03)   </features>
	I0729 13:29:56.449469  992950 main.go:141] libmachine: (ha-104111-m03)   <cpu mode='host-passthrough'>
	I0729 13:29:56.449476  992950 main.go:141] libmachine: (ha-104111-m03)   
	I0729 13:29:56.449486  992950 main.go:141] libmachine: (ha-104111-m03)   </cpu>
	I0729 13:29:56.449498  992950 main.go:141] libmachine: (ha-104111-m03)   <os>
	I0729 13:29:56.449512  992950 main.go:141] libmachine: (ha-104111-m03)     <type>hvm</type>
	I0729 13:29:56.449523  992950 main.go:141] libmachine: (ha-104111-m03)     <boot dev='cdrom'/>
	I0729 13:29:56.449533  992950 main.go:141] libmachine: (ha-104111-m03)     <boot dev='hd'/>
	I0729 13:29:56.449551  992950 main.go:141] libmachine: (ha-104111-m03)     <bootmenu enable='no'/>
	I0729 13:29:56.449559  992950 main.go:141] libmachine: (ha-104111-m03)   </os>
	I0729 13:29:56.449567  992950 main.go:141] libmachine: (ha-104111-m03)   <devices>
	I0729 13:29:56.449583  992950 main.go:141] libmachine: (ha-104111-m03)     <disk type='file' device='cdrom'>
	I0729 13:29:56.449600  992950 main.go:141] libmachine: (ha-104111-m03)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/boot2docker.iso'/>
	I0729 13:29:56.449611  992950 main.go:141] libmachine: (ha-104111-m03)       <target dev='hdc' bus='scsi'/>
	I0729 13:29:56.449622  992950 main.go:141] libmachine: (ha-104111-m03)       <readonly/>
	I0729 13:29:56.449632  992950 main.go:141] libmachine: (ha-104111-m03)     </disk>
	I0729 13:29:56.449648  992950 main.go:141] libmachine: (ha-104111-m03)     <disk type='file' device='disk'>
	I0729 13:29:56.449660  992950 main.go:141] libmachine: (ha-104111-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:29:56.449676  992950 main.go:141] libmachine: (ha-104111-m03)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/ha-104111-m03.rawdisk'/>
	I0729 13:29:56.449687  992950 main.go:141] libmachine: (ha-104111-m03)       <target dev='hda' bus='virtio'/>
	I0729 13:29:56.449707  992950 main.go:141] libmachine: (ha-104111-m03)     </disk>
	I0729 13:29:56.449721  992950 main.go:141] libmachine: (ha-104111-m03)     <interface type='network'>
	I0729 13:29:56.449729  992950 main.go:141] libmachine: (ha-104111-m03)       <source network='mk-ha-104111'/>
	I0729 13:29:56.449736  992950 main.go:141] libmachine: (ha-104111-m03)       <model type='virtio'/>
	I0729 13:29:56.449748  992950 main.go:141] libmachine: (ha-104111-m03)     </interface>
	I0729 13:29:56.449759  992950 main.go:141] libmachine: (ha-104111-m03)     <interface type='network'>
	I0729 13:29:56.449768  992950 main.go:141] libmachine: (ha-104111-m03)       <source network='default'/>
	I0729 13:29:56.449778  992950 main.go:141] libmachine: (ha-104111-m03)       <model type='virtio'/>
	I0729 13:29:56.449789  992950 main.go:141] libmachine: (ha-104111-m03)     </interface>
	I0729 13:29:56.449799  992950 main.go:141] libmachine: (ha-104111-m03)     <serial type='pty'>
	I0729 13:29:56.449808  992950 main.go:141] libmachine: (ha-104111-m03)       <target port='0'/>
	I0729 13:29:56.449814  992950 main.go:141] libmachine: (ha-104111-m03)     </serial>
	I0729 13:29:56.449822  992950 main.go:141] libmachine: (ha-104111-m03)     <console type='pty'>
	I0729 13:29:56.449833  992950 main.go:141] libmachine: (ha-104111-m03)       <target type='serial' port='0'/>
	I0729 13:29:56.449845  992950 main.go:141] libmachine: (ha-104111-m03)     </console>
	I0729 13:29:56.449859  992950 main.go:141] libmachine: (ha-104111-m03)     <rng model='virtio'>
	I0729 13:29:56.449873  992950 main.go:141] libmachine: (ha-104111-m03)       <backend model='random'>/dev/random</backend>
	I0729 13:29:56.449883  992950 main.go:141] libmachine: (ha-104111-m03)     </rng>
	I0729 13:29:56.449891  992950 main.go:141] libmachine: (ha-104111-m03)     
	I0729 13:29:56.449901  992950 main.go:141] libmachine: (ha-104111-m03)     
	I0729 13:29:56.449912  992950 main.go:141] libmachine: (ha-104111-m03)   </devices>
	I0729 13:29:56.449927  992950 main.go:141] libmachine: (ha-104111-m03) </domain>
	I0729 13:29:56.449939  992950 main.go:141] libmachine: (ha-104111-m03) 
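The XML printed above is the complete libvirt domain definition the KVM driver submits for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-104111 network, one on libvirt's default network). minikube goes through its docker-machine driver plugin for this; as a rough standalone sketch (not minikube's own code), defining and booting an equivalent domain with the libvirt Go bindings might look like:

	package main

	import (
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// domain.xml is assumed to contain a definition like the one logged above.
		xmlBytes, err := os.ReadFile("domain.xml")
		if err != nil {
			panic(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xmlBytes)) // register the domain with libvirt
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boot the VM
			panic(err)
		}
	}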
	I0729 13:29:56.457215  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:8c:01:54 in network default
	I0729 13:29:56.457786  992950 main.go:141] libmachine: (ha-104111-m03) Ensuring networks are active...
	I0729 13:29:56.457811  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:56.458421  992950 main.go:141] libmachine: (ha-104111-m03) Ensuring network default is active
	I0729 13:29:56.458737  992950 main.go:141] libmachine: (ha-104111-m03) Ensuring network mk-ha-104111 is active
	I0729 13:29:56.459072  992950 main.go:141] libmachine: (ha-104111-m03) Getting domain xml...
	I0729 13:29:56.459756  992950 main.go:141] libmachine: (ha-104111-m03) Creating domain...
	I0729 13:29:57.501999  992950 main.go:141] libmachine: (ha-104111-m03) Waiting to get IP...
	I0729 13:29:57.502803  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:57.503209  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:57.503262  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:57.503180  993689 retry.go:31] will retry after 298.469962ms: waiting for machine to come up
	I0729 13:29:57.803846  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:57.804428  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:57.804459  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:57.804372  993689 retry.go:31] will retry after 381.821251ms: waiting for machine to come up
	I0729 13:29:58.187924  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:58.188495  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:58.188523  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:58.188455  993689 retry.go:31] will retry after 434.823731ms: waiting for machine to come up
	I0729 13:29:58.625115  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:58.625596  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:58.625626  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:58.625546  993689 retry.go:31] will retry after 407.070954ms: waiting for machine to come up
	I0729 13:29:59.033847  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:59.034305  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:59.034337  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:59.034245  993689 retry.go:31] will retry after 705.30597ms: waiting for machine to come up
	I0729 13:29:59.741197  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:29:59.741542  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:29:59.741569  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:29:59.741514  993689 retry.go:31] will retry after 735.075984ms: waiting for machine to come up
	I0729 13:30:00.478330  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:00.478782  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:00.478820  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:00.478706  993689 retry.go:31] will retry after 775.52294ms: waiting for machine to come up
	I0729 13:30:01.255703  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:01.256209  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:01.256236  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:01.256147  993689 retry.go:31] will retry after 1.484398935s: waiting for machine to come up
	I0729 13:30:02.742528  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:02.742969  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:02.742999  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:02.742900  993689 retry.go:31] will retry after 1.641905411s: waiting for machine to come up
	I0729 13:30:04.386251  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:04.386697  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:04.386726  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:04.386639  993689 retry.go:31] will retry after 2.116497134s: waiting for machine to come up
	I0729 13:30:06.505074  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:06.505599  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:06.505629  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:06.505541  993689 retry.go:31] will retry after 2.589119157s: waiting for machine to come up
	I0729 13:30:09.097703  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:09.098114  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:09.098141  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:09.098056  993689 retry.go:31] will retry after 2.52148825s: waiting for machine to come up
	I0729 13:30:11.621108  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:11.621529  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:11.621559  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:11.621473  993689 retry.go:31] will retry after 3.286341726s: waiting for machine to come up
	I0729 13:30:14.911901  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:14.912230  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find current IP address of domain ha-104111-m03 in network mk-ha-104111
	I0729 13:30:14.912253  992950 main.go:141] libmachine: (ha-104111-m03) DBG | I0729 13:30:14.912211  993689 retry.go:31] will retry after 5.551884704s: waiting for machine to come up
	I0729 13:30:20.469159  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.469676  992950 main.go:141] libmachine: (ha-104111-m03) Found IP for machine: 192.168.39.202
	I0729 13:30:20.469709  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has current primary IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
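The retry sequence above is a poll-with-backoff loop: the driver repeatedly looks for a DHCP lease matching the new MAC address, sleeping for a growing, randomized interval between attempts (roughly 300ms up to several seconds) until the guest acquires 192.168.39.202. The exact backoff policy is minikube's own; the self-contained sketch below, with hypothetical names and timeouts, only mirrors the pattern visible in the log:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it reports an address or the deadline passes.
	// Delays grow and carry jitter, mirroring the retry intervals in the log.
	func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay / 2)))
			time.Sleep(delay + jitter)
			if delay < 5*time.Second {
				delay = delay * 3 / 2 // back off gradually
			}
		}
		return "", errors.New("timed out waiting for the machine to get an IP")
	}

	func main() {
		// Stand-in lookup: a real one would read libvirt's DHCP leases for the domain's MAC.
		attempts := 0
		ip, err := waitForIP(func() (string, bool) {
			attempts++
			if attempts >= 4 {
				return "192.168.39.202", true
			}
			return "", false
		}, 3*time.Minute)
		fmt.Println(ip, err)
	}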
	I0729 13:30:20.469718  992950 main.go:141] libmachine: (ha-104111-m03) Reserving static IP address...
	I0729 13:30:20.470068  992950 main.go:141] libmachine: (ha-104111-m03) DBG | unable to find host DHCP lease matching {name: "ha-104111-m03", mac: "52:54:00:4a:86:be", ip: "192.168.39.202"} in network mk-ha-104111
	I0729 13:30:20.544666  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Getting to WaitForSSH function...
	I0729 13:30:20.544699  992950 main.go:141] libmachine: (ha-104111-m03) Reserved static IP address: 192.168.39.202
	I0729 13:30:20.544713  992950 main.go:141] libmachine: (ha-104111-m03) Waiting for SSH to be available...
	I0729 13:30:20.547598  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.548127  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:20.548150  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.548322  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Using SSH client type: external
	I0729 13:30:20.548352  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa (-rw-------)
	I0729 13:30:20.548385  992950 main.go:141] libmachine: (ha-104111-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:30:20.548401  992950 main.go:141] libmachine: (ha-104111-m03) DBG | About to run SSH command:
	I0729 13:30:20.548427  992950 main.go:141] libmachine: (ha-104111-m03) DBG | exit 0
	I0729 13:30:20.676697  992950 main.go:141] libmachine: (ha-104111-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 13:30:20.677036  992950 main.go:141] libmachine: (ha-104111-m03) KVM machine creation complete!
	I0729 13:30:20.677379  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetConfigRaw
	I0729 13:30:20.677988  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:20.678208  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:20.678403  992950 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:30:20.678419  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:30:20.679836  992950 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:30:20.679856  992950 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:30:20.679867  992950 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:30:20.679876  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:20.681994  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.682351  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:20.682392  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.682491  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:20.682718  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.682875  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.683016  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:20.683200  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:20.683545  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:20.683563  992950 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:30:20.787942  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
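Here the "native" SSH client is a Go SSH connection that runs `exit 0` as a liveness probe before provisioning continues. A rough standalone equivalent with golang.org/x/crypto/ssh is sketched below; the key path, user, and address come from the log, and ignoring the host key is an assumption suited only to a throwaway test VM:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: skip host-key checks for the test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.202:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("exit 0") // same probe the log runs
		fmt.Printf("output=%q err=%v\n", out, err)
	}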
	I0729 13:30:20.787977  992950 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:30:20.787989  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:20.790932  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.791361  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:20.791388  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.791594  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:20.791816  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.792009  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.792192  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:20.792362  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:20.792557  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:20.792570  992950 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:30:20.896951  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:30:20.897050  992950 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:30:20.897066  992950 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:30:20.897077  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetMachineName
	I0729 13:30:20.897340  992950 buildroot.go:166] provisioning hostname "ha-104111-m03"
	I0729 13:30:20.897371  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetMachineName
	I0729 13:30:20.897578  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:20.899994  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.900430  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:20.900460  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:20.900628  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:20.900815  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.900978  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:20.901122  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:20.901292  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:20.901485  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:20.901498  992950 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-104111-m03 && echo "ha-104111-m03" | sudo tee /etc/hostname
	I0729 13:30:21.019248  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111-m03
	
	I0729 13:30:21.019280  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.022097  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.022588  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.022619  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.022795  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.022992  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.023171  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.023351  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.023553  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:21.023743  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:21.023757  992950 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-104111-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-104111-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-104111-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:30:21.139606  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:30:21.139657  992950 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:30:21.139686  992950 buildroot.go:174] setting up certificates
	I0729 13:30:21.139702  992950 provision.go:84] configureAuth start
	I0729 13:30:21.139721  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetMachineName
	I0729 13:30:21.140056  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:30:21.142856  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.143218  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.143249  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.143387  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.145592  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.145993  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.146028  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.146134  992950 provision.go:143] copyHostCerts
	I0729 13:30:21.146169  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:30:21.146215  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 13:30:21.146227  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:30:21.146309  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:30:21.146433  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:30:21.146460  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 13:30:21.146470  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:30:21.146506  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:30:21.146573  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:30:21.146596  992950 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 13:30:21.146605  992950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:30:21.146639  992950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:30:21.146703  992950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.ha-104111-m03 san=[127.0.0.1 192.168.39.202 ha-104111-m03 localhost minikube]
	I0729 13:30:21.316822  992950 provision.go:177] copyRemoteCerts
	I0729 13:30:21.316901  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:30:21.316935  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.319677  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.320091  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.320125  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.320317  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.320533  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.320709  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.320816  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:30:21.403366  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:30:21.403436  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:30:21.428787  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:30:21.428855  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 13:30:21.453361  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:30:21.453454  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 13:30:21.477890  992950 provision.go:87] duration metric: took 338.17007ms to configureAuth
	I0729 13:30:21.477919  992950 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:30:21.478156  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:21.478254  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.480971  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.481358  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.481390  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.481577  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.481795  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.481996  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.482132  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.482312  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:21.482475  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:21.482489  992950 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:30:21.755980  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:30:21.756019  992950 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:30:21.756032  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetURL
	I0729 13:30:21.757432  992950 main.go:141] libmachine: (ha-104111-m03) DBG | Using libvirt version 6000000
	I0729 13:30:21.759897  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.760258  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.760284  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.760514  992950 main.go:141] libmachine: Docker is up and running!
	I0729 13:30:21.760534  992950 main.go:141] libmachine: Reticulating splines...
	I0729 13:30:21.760544  992950 client.go:171] duration metric: took 25.786731153s to LocalClient.Create
	I0729 13:30:21.760574  992950 start.go:167] duration metric: took 25.786802086s to libmachine.API.Create "ha-104111"
	I0729 13:30:21.760588  992950 start.go:293] postStartSetup for "ha-104111-m03" (driver="kvm2")
	I0729 13:30:21.760601  992950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:30:21.760634  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:21.760948  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:30:21.760979  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.763320  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.763742  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.763769  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.764026  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.764246  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.764443  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.764596  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:30:21.847034  992950 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:30:21.851637  992950 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:30:21.851664  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:30:21.851727  992950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:30:21.851798  992950 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 13:30:21.851810  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 13:30:21.851888  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:30:21.861246  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:30:21.885557  992950 start.go:296] duration metric: took 124.943682ms for postStartSetup
	I0729 13:30:21.885625  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetConfigRaw
	I0729 13:30:21.886214  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:30:21.888965  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.889335  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.889362  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.889604  992950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:30:21.889793  992950 start.go:128] duration metric: took 25.935449767s to createHost
	I0729 13:30:21.889818  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:21.892440  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.892894  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:21.892924  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:21.893045  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:21.893277  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.893483  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:21.893715  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:21.893933  992950 main.go:141] libmachine: Using SSH client type: native
	I0729 13:30:21.894188  992950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0729 13:30:21.894204  992950 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:30:22.000945  992950 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722259821.979624969
	
	I0729 13:30:22.000977  992950 fix.go:216] guest clock: 1722259821.979624969
	I0729 13:30:22.000988  992950 fix.go:229] Guest: 2024-07-29 13:30:21.979624969 +0000 UTC Remote: 2024-07-29 13:30:21.889805218 +0000 UTC m=+151.135809526 (delta=89.819751ms)
	I0729 13:30:22.001008  992950 fix.go:200] guest clock delta is within tolerance: 89.819751ms
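The fix.go lines above compare the guest's "date +%s.%N" output against the host wall clock and accept the machine when the delta stays inside a tolerance. A minimal Go sketch of that comparison, assuming a 2-second tolerance and plain float parsing (both illustrative choices, not minikube's actual constants):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is from the supplied host reference time.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	delta, err := clockDelta("1722259821.979624969", time.Unix(1722259821, 889805218))
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}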
	I0729 13:30:22.001016  992950 start.go:83] releasing machines lock for "ha-104111-m03", held for 26.046780651s
	I0729 13:30:22.001044  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:22.001303  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:30:22.003814  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.004227  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:22.004256  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.006586  992950 out.go:177] * Found network options:
	I0729 13:30:22.008030  992950 out.go:177]   - NO_PROXY=192.168.39.120,192.168.39.140
	W0729 13:30:22.009191  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 13:30:22.009215  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 13:30:22.009229  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:22.009810  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:22.010000  992950 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:30:22.010103  992950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:30:22.010144  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	W0729 13:30:22.010214  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 13:30:22.010241  992950 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 13:30:22.010304  992950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:30:22.010327  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:30:22.013130  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.013155  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.013541  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:22.013573  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.013601  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:22.013617  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:22.013666  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:22.013857  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:30:22.013900  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:22.014053  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:30:22.014053  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:22.014235  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:30:22.014254  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:30:22.014400  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:30:22.255424  992950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:30:22.261881  992950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:30:22.261952  992950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:30:22.278604  992950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
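The two steps above neutralize any preinstalled bridge/podman CNI definitions by renaming them with a .mk_disabled suffix, so only the CNI that minikube configures later is picked up. A rough local-filesystem equivalent of that find-and-rename in Go (the real run executes it over SSH via ssh_runner):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir so the
// container runtime ignores them, mirroring the `find ... -exec mv` above.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled CNI configs:", disabled)
}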
	I0729 13:30:22.278629  992950 start.go:495] detecting cgroup driver to use...
	I0729 13:30:22.278697  992950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:30:22.295977  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:30:22.309347  992950 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:30:22.309397  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:30:22.323713  992950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:30:22.337031  992950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:30:22.473574  992950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:30:22.617495  992950 docker.go:233] disabling docker service ...
	I0729 13:30:22.617591  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:30:22.633090  992950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:30:22.646863  992950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:30:22.789995  992950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:30:22.922701  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:30:22.937901  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:30:22.956150  992950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:30:22.956231  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:22.966595  992950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:30:22.966668  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:22.977248  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:22.987581  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:22.997738  992950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:30:23.008451  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:23.018602  992950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:23.036643  992950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:30:23.047392  992950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:30:23.056283  992950 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 13:30:23.056336  992950 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 13:30:23.069826  992950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:30:23.079322  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:30:23.205697  992950 ssh_runner.go:195] Run: sudo systemctl restart crio
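The cri-o preparation above is a series of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) followed by a daemon-reload and a crio restart. A small Go sketch of the same style of line rewriting, assuming the drop-in already carries pause_image and cgroup_manager keys as the sed expressions do:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioDropIn rewrites the pause image and cgroup manager in a cri-o
// drop-in, the same substitutions the sed commands above perform.
func rewriteCrioDropIn(conf, pauseImage, cgroupManager string) string {
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	out := rewriteCrioDropIn(in, "registry.k8s.io/pause:3.9", "cgroupfs")
	fmt.Print(out)
}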
	I0729 13:30:23.347229  992950 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:30:23.347318  992950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:30:23.352355  992950 start.go:563] Will wait 60s for crictl version
	I0729 13:30:23.352426  992950 ssh_runner.go:195] Run: which crictl
	I0729 13:30:23.356428  992950 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:30:23.398875  992950 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:30:23.398966  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:30:23.426519  992950 ssh_runner.go:195] Run: crio --version
	I0729 13:30:23.458962  992950 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:30:23.460276  992950 out.go:177]   - env NO_PROXY=192.168.39.120
	I0729 13:30:23.461523  992950 out.go:177]   - env NO_PROXY=192.168.39.120,192.168.39.140
	I0729 13:30:23.462580  992950 main.go:141] libmachine: (ha-104111-m03) Calling .GetIP
	I0729 13:30:23.465387  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:23.465731  992950 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:30:23.465762  992950 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:30:23.465993  992950 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:30:23.470979  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 13:30:23.484119  992950 mustload.go:65] Loading cluster: ha-104111
	I0729 13:30:23.484371  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:23.484678  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:30:23.484719  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:23.499671  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0729 13:30:23.500085  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:23.500633  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:30:23.500658  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:23.501038  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:23.501245  992950 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:30:23.502886  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:30:23.503180  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:30:23.503222  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:23.518531  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0729 13:30:23.518906  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:23.519357  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:30:23.519378  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:23.519682  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:23.519889  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:30:23.520051  992950 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111 for IP: 192.168.39.202
	I0729 13:30:23.520064  992950 certs.go:194] generating shared ca certs ...
	I0729 13:30:23.520083  992950 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:30:23.520218  992950 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:30:23.520254  992950 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:30:23.520264  992950 certs.go:256] generating profile certs ...
	I0729 13:30:23.520333  992950 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key
	I0729 13:30:23.520359  992950 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.8a8c6f18
	I0729 13:30:23.520375  992950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.8a8c6f18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.120 192.168.39.140 192.168.39.202 192.168.39.254]
	I0729 13:30:23.883932  992950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.8a8c6f18 ...
	I0729 13:30:23.883966  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.8a8c6f18: {Name:mk0835a088a954b28031c8441d71a4cb8d6f5a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:30:23.884140  992950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.8a8c6f18 ...
	I0729 13:30:23.884154  992950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.8a8c6f18: {Name:mk22c6a1527399308bc4fbf7c2a49423798bba4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:30:23.884227  992950 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.8a8c6f18 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt
	I0729 13:30:23.884349  992950 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.8a8c6f18 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key
	I0729 13:30:23.884493  992950 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key
	I0729 13:30:23.884516  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:30:23.884531  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:30:23.884543  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:30:23.884556  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:30:23.884568  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:30:23.884579  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:30:23.884590  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:30:23.884602  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:30:23.884651  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 13:30:23.884678  992950 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 13:30:23.884688  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:30:23.884710  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:30:23.884730  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:30:23.884750  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:30:23.884785  992950 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:30:23.884809  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 13:30:23.884823  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 13:30:23.884835  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
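The apiserver.crt generated above carries every control-plane address in its SANs (the service IP, localhost, the two existing nodes, the new m03, and the 192.168.39.254 VIP), which is what lets clients reach the API server through the virtual IP. A self-contained Go sketch of issuing a certificate with that SAN list; the throwaway CA and the short validity window are purely illustrative, since minikube signs with its persistent minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newAPIServerCert issues a serving certificate whose SANs cover every
// control-plane IP plus the HA virtual IP, as in the apiserver.crt above.
// The CA here is generated on the fly purely for illustration.
func newAPIServerCert(ips []net.IP) (*x509.Certificate, error) {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "illustrative-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return nil, err
	}
	servingKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // the SAN list that makes the cert valid for the VIP
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &servingKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.120"), net.ParseIP("192.168.39.140"),
		net.ParseIP("192.168.39.202"), net.ParseIP("192.168.39.254"),
	}
	cert, err := newAPIServerCert(ips)
	if err != nil {
		panic(err)
	}
	fmt.Println("issued cert with SANs:", cert.IPAddresses)
}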
	I0729 13:30:23.884877  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:30:23.888048  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:30:23.888505  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:30:23.888535  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:30:23.888694  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:30:23.888939  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:30:23.889112  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:30:23.889264  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:30:23.964811  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 13:30:23.971040  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 13:30:23.982962  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 13:30:23.987609  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 13:30:23.998540  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 13:30:24.002835  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 13:30:24.013962  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 13:30:24.018301  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 13:30:24.029024  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 13:30:24.033300  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 13:30:24.045031  992950 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 13:30:24.050048  992950 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 13:30:24.061058  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:30:24.085701  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:30:24.108988  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:30:24.132493  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:30:24.156981  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 13:30:24.181050  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 13:30:24.206593  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:30:24.230412  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:30:24.253970  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 13:30:24.277190  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 13:30:24.299653  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:30:24.321868  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 13:30:24.337623  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 13:30:24.353693  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 13:30:24.369196  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 13:30:24.384909  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 13:30:24.401470  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 13:30:24.419437  992950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 13:30:24.436902  992950 ssh_runner.go:195] Run: openssl version
	I0729 13:30:24.442870  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 13:30:24.453688  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 13:30:24.458054  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 13:30:24.458106  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 13:30:24.463889  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 13:30:24.475761  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 13:30:24.486831  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 13:30:24.491240  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 13:30:24.491302  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 13:30:24.497104  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:30:24.507186  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:30:24.518206  992950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:30:24.522594  992950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:30:24.522645  992950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:30:24.528455  992950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
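Each CA installed above is made visible to OpenSSL-based clients by linking it as /etc/ssl/certs/<subject-hash>.0, where the hash comes from "openssl x509 -hash -noout". A short Go sketch of that hash-and-symlink step, assuming the openssl binary is on PATH and reusing the paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash asks openssl for the certificate's subject hash and creates
// the /etc/ssl/certs/<hash>.0 symlink the OpenSSL trust store expects,
// mirroring the `openssl x509 -hash` + `ln -fs` pair above.
func linkCAByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Recreate the link unconditionally, like `ln -fs`.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("trust store link:", link)
}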
	I0729 13:30:24.538684  992950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:30:24.542773  992950 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 13:30:24.542833  992950 kubeadm.go:934] updating node {m03 192.168.39.202 8443 v1.30.3 crio true true} ...
	I0729 13:30:24.542929  992950 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-104111-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:30:24.542964  992950 kube-vip.go:115] generating kube-vip config ...
	I0729 13:30:24.543000  992950 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 13:30:24.558593  992950 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 13:30:24.558671  992950 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
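The kube-vip static pod above is rendered from a template in which only a few values vary per cluster: the virtual IP, the API port, and the interface. A cut-down text/template fragment showing that parameterization; this is not minikube's actual template, just an illustration of the rendering step:

package main

import (
	"os"
	"text/template"
)

// A reduced fragment of a kube-vip manifest, parameterized the way the
// generated config above is (VIP address, API port, interface).
const kubeVIPFragment = `    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
`

type vipParams struct {
	VIP       string
	Interface string
	Port      int
}

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVIPFragment))
	if err := tmpl.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Interface: "eth0", Port: 8443}); err != nil {
		panic(err)
	}
}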
	I0729 13:30:24.558743  992950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:30:24.569427  992950 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 13:30:24.569489  992950 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 13:30:24.578874  992950 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 13:30:24.578906  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 13:30:24.578962  992950 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 13:30:24.578880  992950 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 13:30:24.578881  992950 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 13:30:24.579022  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 13:30:24.579041  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:30:24.579092  992950 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 13:30:24.583233  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 13:30:24.583260  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 13:30:24.619828  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 13:30:24.619841  992950 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 13:30:24.619870  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 13:30:24.619953  992950 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 13:30:24.662888  992950 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 13:30:24.662938  992950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
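Each kubeadm/kubectl/kubelet transfer above follows the same pattern: stat the target path, skip the copy when the binary is already present, otherwise stream it from the local cache. A Go sketch of that decision against a local filesystem (minikube performs the stat and the copy over SSH):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src into destDir only when the destination is missing,
// the same stat-then-transfer decision the log shows for kubeadm/kubectl/kubelet.
func ensureBinary(src, destDir string) (copied bool, err error) {
	dest := filepath.Join(destDir, filepath.Base(src))
	if _, err := os.Stat(dest); err == nil {
		return false, nil // already present, skip the transfer
	} else if !os.IsNotExist(err) {
		return false, err
	}
	if err := os.MkdirAll(destDir, 0o755); err != nil {
		return false, err
	}
	in, err := os.Open(src)
	if err != nil {
		return false, err
	}
	defer in.Close()
	out, err := os.OpenFile(dest, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
	if err != nil {
		return false, err
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	copied, err := ensureBinary(
		"/home/jenkins/minikube-integration/19338-974764/.minikube/cache/linux/amd64/v1.30.3/kubelet",
		"/var/lib/minikube/binaries/v1.30.3")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("transferred:", copied)
}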
	I0729 13:30:25.467099  992950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 13:30:25.476637  992950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 13:30:25.493254  992950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:30:25.509447  992950 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 13:30:25.526326  992950 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 13:30:25.530232  992950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
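The /etc/hosts rewrites above (host.minikube.internal earlier, control-plane.minikube.internal here) are idempotent: drop any existing line for the hostname, then append a fresh tab-separated mapping. The same upsert expressed in Go over the file contents:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<hostname>" and appends
// a fresh "ip\thostname" mapping, the same grep -v + echo rewrite shown above.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.254", "control-plane.minikube.internal"))
}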
	I0729 13:30:25.542198  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:30:25.680050  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:30:25.696990  992950 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:30:25.697435  992950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:30:25.697491  992950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:30:25.712979  992950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
	I0729 13:30:25.713467  992950 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:30:25.713992  992950 main.go:141] libmachine: Using API Version  1
	I0729 13:30:25.714023  992950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:30:25.714399  992950 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:30:25.714669  992950 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:30:25.714844  992950 start.go:317] joinCluster: &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:30:25.714980  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 13:30:25.715004  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:30:25.717843  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:30:25.718353  992950 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:30:25.718386  992950 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:30:25.718564  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:30:25.718777  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:30:25.718943  992950 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:30:25.719120  992950 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:30:25.891140  992950 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:30:25.891207  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jsru0n.6tmnapp7fvkxu10o --discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-104111-m03 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443"
	I0729 13:30:50.044737  992950 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jsru0n.6tmnapp7fvkxu10o --discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-104111-m03 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443": (24.153495806s)
	I0729 13:30:50.044780  992950 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 13:30:50.665990  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-104111-m03 minikube.k8s.io/updated_at=2024_07_29T13_30_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=ha-104111 minikube.k8s.io/primary=false
	I0729 13:30:50.788873  992950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-104111-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 13:30:50.906265  992950 start.go:319] duration metric: took 25.191408416s to joinCluster
	I0729 13:30:50.906361  992950 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 13:30:50.906679  992950 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:30:50.907917  992950 out.go:177] * Verifying Kubernetes components...
	I0729 13:30:50.909274  992950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:30:51.264205  992950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:30:51.290785  992950 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:30:51.291111  992950 kapi.go:59] client config for ha-104111: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.crt", KeyFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key", CAFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 13:30:51.291177  992950 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.120:8443
	I0729 13:30:51.291405  992950 node_ready.go:35] waiting up to 6m0s for node "ha-104111-m03" to be "Ready" ...
	I0729 13:30:51.291509  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:51.291519  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:51.291529  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:51.291540  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:51.294765  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:51.792443  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:51.792468  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:51.792478  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:51.792483  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:51.795927  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:52.292584  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:52.292605  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:52.292614  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:52.292618  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:52.295905  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:52.791812  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:52.791834  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:52.791841  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:52.791844  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:52.795407  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:53.292160  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:53.292182  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:53.292190  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:53.292196  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:53.295098  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:30:53.295609  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:30:53.791680  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:53.791704  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:53.791714  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:53.791718  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:53.795013  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:54.292006  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:54.292032  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:54.292045  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:54.292052  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:54.297003  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:30:54.792212  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:54.792235  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:54.792244  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:54.792251  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:54.795284  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:55.292380  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:55.292430  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:55.292444  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:55.292451  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:55.295654  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:55.296123  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:30:55.791592  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:55.791617  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:55.791628  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:55.791633  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:55.795193  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:56.292358  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:56.292390  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:56.292402  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:56.292428  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:56.296131  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:56.792032  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:56.792064  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:56.792074  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:56.792078  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:56.795178  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:57.292556  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:57.292579  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:57.292587  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:57.292590  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:57.295538  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:30:57.792190  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:57.792214  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:57.792222  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:57.792226  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:57.795481  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:57.796329  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:30:58.292138  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:58.292164  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:58.292173  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:58.292179  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:58.295374  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:58.792074  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:58.792100  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:58.792123  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:58.792141  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:58.795447  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:59.291598  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:59.291622  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:59.291631  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:59.291641  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:59.295477  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:59.792580  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:30:59.792606  992950 round_trippers.go:469] Request Headers:
	I0729 13:30:59.792615  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:30:59.792619  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:30:59.796075  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:30:59.796850  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:31:00.291797  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:00.291823  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:00.291834  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:00.291839  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:00.295347  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:00.792261  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:00.792288  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:00.792300  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:00.792306  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:00.796719  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:31:01.291849  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:01.291873  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:01.291882  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:01.291886  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:01.295201  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:01.792043  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:01.792073  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:01.792086  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:01.792092  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:01.795257  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:02.291801  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:02.291828  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:02.291839  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:02.291848  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:02.295433  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:02.295900  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:31:02.792357  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:02.792381  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:02.792390  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:02.792395  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:02.796290  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:03.292642  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:03.292668  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:03.292680  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:03.292686  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:03.296168  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:03.791911  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:03.791935  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:03.791942  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:03.791947  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:03.795429  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:04.292377  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:04.292403  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:04.292425  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:04.292431  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:04.295741  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:04.296247  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:31:04.791591  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:04.791614  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:04.791623  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:04.791631  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:04.794960  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:05.292230  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:05.292259  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:05.292270  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:05.292278  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:05.295806  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:05.791592  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:05.791613  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:05.791621  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:05.791627  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:05.794832  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:06.292260  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:06.292283  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.292291  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.292296  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.295902  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:06.296633  992950 node_ready.go:53] node "ha-104111-m03" has status "Ready":"False"
	I0729 13:31:06.791840  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:06.791864  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.791873  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.791877  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.795015  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:06.795654  992950 node_ready.go:49] node "ha-104111-m03" has status "Ready":"True"
	I0729 13:31:06.795672  992950 node_ready.go:38] duration metric: took 15.504251916s for node "ha-104111-m03" to be "Ready" ...
	I0729 13:31:06.795681  992950 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:31:06.795746  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:06.795755  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.795762  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.795769  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.803015  992950 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 13:31:06.809007  992950 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.809109  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9jrnl
	I0729 13:31:06.809121  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.809131  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.809138  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.812118  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.812829  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:06.812849  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.812859  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.812864  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.815318  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.815905  992950 pod_ready.go:92] pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:06.815926  992950 pod_ready.go:81] duration metric: took 6.8965ms for pod "coredns-7db6d8ff4d-9jrnl" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.815935  992950 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.815984  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gcf7q
	I0729 13:31:06.815991  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.815998  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.816001  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.818782  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.819606  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:06.819624  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.819634  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.819640  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.822497  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.823059  992950 pod_ready.go:92] pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:06.823077  992950 pod_ready.go:81] duration metric: took 7.13506ms for pod "coredns-7db6d8ff4d-gcf7q" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.823091  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.823146  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111
	I0729 13:31:06.823156  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.823166  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.823171  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.825912  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.826341  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:06.826357  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.826367  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.826374  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.828314  992950 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 13:31:06.828788  992950 pod_ready.go:92] pod "etcd-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:06.828811  992950 pod_ready.go:81] duration metric: took 5.712779ms for pod "etcd-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.828822  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.828881  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111-m02
	I0729 13:31:06.828891  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.828901  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.828908  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.830972  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:06.831472  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:06.831488  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.831499  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.831507  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.834527  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:06.835237  992950 pod_ready.go:92] pod "etcd-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:06.835254  992950 pod_ready.go:81] duration metric: took 6.425388ms for pod "etcd-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.835265  992950 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:06.992701  992950 request.go:629] Waited for 157.355132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111-m03
	I0729 13:31:06.992772  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/etcd-ha-104111-m03
	I0729 13:31:06.992781  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:06.992789  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:06.992797  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:06.995815  992950 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 13:31:07.192562  992950 request.go:629] Waited for 196.036575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:07.192650  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:07.192659  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.192667  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.192676  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.196062  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.196786  992950 pod_ready.go:92] pod "etcd-ha-104111-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:07.196819  992950 pod_ready.go:81] duration metric: took 361.540851ms for pod "etcd-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.196843  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.392052  992950 request.go:629] Waited for 195.103878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111
	I0729 13:31:07.392114  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111
	I0729 13:31:07.392119  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.392126  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.392130  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.395341  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.592486  992950 request.go:629] Waited for 196.373004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:07.592563  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:07.592570  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.592580  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.592592  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.595625  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.596166  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:07.596185  992950 pod_ready.go:81] duration metric: took 399.330679ms for pod "kube-apiserver-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.596194  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.792271  992950 request.go:629] Waited for 195.988198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m02
	I0729 13:31:07.792361  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m02
	I0729 13:31:07.792368  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.792378  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.792384  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.796068  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.992076  992950 request.go:629] Waited for 195.300254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:07.992166  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:07.992176  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:07.992184  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:07.992192  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:07.995318  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:07.996012  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:07.996032  992950 pod_ready.go:81] duration metric: took 399.831511ms for pod "kube-apiserver-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:07.996047  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.192110  992950 request.go:629] Waited for 195.9662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m03
	I0729 13:31:08.192197  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-104111-m03
	I0729 13:31:08.192202  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.192209  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.192214  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.195456  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:08.392690  992950 request.go:629] Waited for 196.415017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:08.392765  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:08.392770  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.392780  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.392786  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.396033  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:08.396951  992950 pod_ready.go:92] pod "kube-apiserver-ha-104111-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:08.396973  992950 pod_ready.go:81] duration metric: took 400.915579ms for pod "kube-apiserver-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.396986  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.591942  992950 request.go:629] Waited for 194.865072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111
	I0729 13:31:08.592027  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111
	I0729 13:31:08.592034  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.592056  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.592080  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.595146  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:08.792663  992950 request.go:629] Waited for 196.754804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:08.792779  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:08.792790  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.792803  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.792810  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.795970  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:08.796752  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:08.796775  992950 pod_ready.go:81] duration metric: took 399.78008ms for pod "kube-controller-manager-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.796789  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:08.992803  992950 request.go:629] Waited for 195.916125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m02
	I0729 13:31:08.992866  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m02
	I0729 13:31:08.992872  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:08.992880  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:08.992885  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:08.996024  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.191925  992950 request.go:629] Waited for 195.149373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:09.192005  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:09.192013  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.192023  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.192030  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.196960  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:31:09.197567  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:09.197595  992950 pod_ready.go:81] duration metric: took 400.798938ms for pod "kube-controller-manager-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.197608  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.392759  992950 request.go:629] Waited for 195.045643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m03
	I0729 13:31:09.392822  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-104111-m03
	I0729 13:31:09.392827  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.392835  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.392839  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.396330  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.592501  992950 request.go:629] Waited for 195.398969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:09.592574  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:09.592579  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.592593  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.592600  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.596474  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.597178  992950 pod_ready.go:92] pod "kube-controller-manager-ha-104111-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:09.597207  992950 pod_ready.go:81] duration metric: took 399.586044ms for pod "kube-controller-manager-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.597224  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5dnvv" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.792267  992950 request.go:629] Waited for 194.936789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dnvv
	I0729 13:31:09.792386  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dnvv
	I0729 13:31:09.792397  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.792425  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.792441  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.795782  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.991991  992950 request.go:629] Waited for 195.308276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:09.992053  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:09.992057  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:09.992065  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:09.992069  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:09.995661  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:09.996598  992950 pod_ready.go:92] pod "kube-proxy-5dnvv" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:09.996620  992950 pod_ready.go:81] duration metric: took 399.386649ms for pod "kube-proxy-5dnvv" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:09.996633  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m765x" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.192489  992950 request.go:629] Waited for 195.78034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m765x
	I0729 13:31:10.192550  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m765x
	I0729 13:31:10.192556  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.192564  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.192570  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.195943  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:10.391862  992950 request.go:629] Waited for 195.282697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:10.391945  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:10.391951  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.391959  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.391964  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.395213  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:10.396080  992950 pod_ready.go:92] pod "kube-proxy-m765x" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:10.396111  992950 pod_ready.go:81] duration metric: took 399.46256ms for pod "kube-proxy-m765x" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.396126  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n6kkf" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.592088  992950 request.go:629] Waited for 195.871002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf
	I0729 13:31:10.592163  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf
	I0729 13:31:10.592170  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.592180  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.592196  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.595795  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:10.792506  992950 request.go:629] Waited for 195.630063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:10.792566  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:10.792571  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.792579  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.792584  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.796159  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:10.796903  992950 pod_ready.go:92] pod "kube-proxy-n6kkf" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:10.796927  992950 pod_ready.go:81] duration metric: took 400.793171ms for pod "kube-proxy-n6kkf" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.796937  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:10.991987  992950 request.go:629] Waited for 194.970741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111
	I0729 13:31:10.992068  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111
	I0729 13:31:10.992074  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:10.992083  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:10.992089  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:10.996135  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:31:11.192252  992950 request.go:629] Waited for 195.285999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:11.192362  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111
	I0729 13:31:11.192373  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.192384  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.192403  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.195684  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.196262  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:11.196283  992950 pod_ready.go:81] duration metric: took 399.338583ms for pod "kube-scheduler-ha-104111" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.196293  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.392363  992950 request.go:629] Waited for 195.964444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m02
	I0729 13:31:11.392456  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m02
	I0729 13:31:11.392463  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.392476  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.392489  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.395644  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.592712  992950 request.go:629] Waited for 196.367309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:11.592781  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m02
	I0729 13:31:11.592788  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.592799  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.592807  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.595949  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.596544  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:11.596568  992950 pod_ready.go:81] duration metric: took 400.267112ms for pod "kube-scheduler-ha-104111-m02" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.596585  992950 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.792562  992950 request.go:629] Waited for 195.888008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m03
	I0729 13:31:11.792661  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-104111-m03
	I0729 13:31:11.792670  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.792677  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.792682  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.795752  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.992848  992950 request.go:629] Waited for 196.362624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:11.992931  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes/ha-104111-m03
	I0729 13:31:11.992936  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:11.992944  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:11.992950  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:11.996242  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:11.997212  992950 pod_ready.go:92] pod "kube-scheduler-ha-104111-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 13:31:11.997230  992950 pod_ready.go:81] duration metric: took 400.632488ms for pod "kube-scheduler-ha-104111-m03" in "kube-system" namespace to be "Ready" ...
	I0729 13:31:11.997242  992950 pod_ready.go:38] duration metric: took 5.201548599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 13:31:11.997261  992950 api_server.go:52] waiting for apiserver process to appear ...
	I0729 13:31:11.997323  992950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:31:12.015443  992950 api_server.go:72] duration metric: took 21.109037184s to wait for apiserver process to appear ...
	I0729 13:31:12.015469  992950 api_server.go:88] waiting for apiserver healthz status ...
	I0729 13:31:12.015497  992950 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0729 13:31:12.021527  992950 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0729 13:31:12.021620  992950 round_trippers.go:463] GET https://192.168.39.120:8443/version
	I0729 13:31:12.021632  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.021647  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.021655  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.022999  992950 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 13:31:12.023173  992950 api_server.go:141] control plane version: v1.30.3
	I0729 13:31:12.023198  992950 api_server.go:131] duration metric: took 7.721077ms to wait for apiserver health ...
	I0729 13:31:12.023207  992950 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 13:31:12.192885  992950 request.go:629] Waited for 169.588554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:12.192954  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:12.192959  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.192971  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.192974  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.199588  992950 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 13:31:12.208358  992950 system_pods.go:59] 24 kube-system pods found
	I0729 13:31:12.208391  992950 system_pods.go:61] "coredns-7db6d8ff4d-9jrnl" [0453ed97-efb4-41c1-8bfb-e7e004e618e0] Running
	I0729 13:31:12.208397  992950 system_pods.go:61] "coredns-7db6d8ff4d-gcf7q" [196981ba-ed16-427c-ae8b-9b7e8ff36be2] Running
	I0729 13:31:12.208401  992950 system_pods.go:61] "etcd-ha-104111" [309561db-8f30-4b42-8252-e02d9a26ec2e] Running
	I0729 13:31:12.208404  992950 system_pods.go:61] "etcd-ha-104111-m02" [4f09acca-1baa-4eba-8ef4-eb3e2b64512c] Running
	I0729 13:31:12.208425  992950 system_pods.go:61] "etcd-ha-104111-m03" [abbb320d-2480-4658-b404-c765904bb5ea] Running
	I0729 13:31:12.208430  992950 system_pods.go:61] "kindnet-9phpm" [60e9c45f-5176-492e-90c7-49b0201afe1e] Running
	I0729 13:31:12.208435  992950 system_pods.go:61] "kindnet-mt9dk" [5f1433be-0f2b-4502-a586-4014c7f23495] Running
	I0729 13:31:12.208440  992950 system_pods.go:61] "kindnet-njndz" [a0477f9b-b1ff-49d8-8f39-21ffb84377e9] Running
	I0729 13:31:12.208444  992950 system_pods.go:61] "kube-apiserver-ha-104111" [d546ecd4-9bdb-4e41-9e4a-74d0c81359d5] Running
	I0729 13:31:12.208447  992950 system_pods.go:61] "kube-apiserver-ha-104111-m02" [70bd608c-3ebe-4306-8ec9-61c254ca5261] Running
	I0729 13:31:12.208450  992950 system_pods.go:61] "kube-apiserver-ha-104111-m03" [8c1333cd-2e2a-4e55-af7a-6b399d6ecefa] Running
	I0729 13:31:12.208454  992950 system_pods.go:61] "kube-controller-manager-ha-104111" [03be8232-ff90-43e1-87e0-5d61aeaa7c96] Running
	I0729 13:31:12.208457  992950 system_pods.go:61] "kube-controller-manager-ha-104111-m02" [d2ca4758-3c38-4655-8bfb-b5a64b0b6bca] Running
	I0729 13:31:12.208461  992950 system_pods.go:61] "kube-controller-manager-ha-104111-m03" [ee0172da-49de-422e-b0cc-f015e6978f15] Running
	I0729 13:31:12.208464  992950 system_pods.go:61] "kube-proxy-5dnvv" [2fb3553e-b114-4528-bf9a-1765356bb2a4] Running
	I0729 13:31:12.208467  992950 system_pods.go:61] "kube-proxy-m765x" [d1051d27-125e-48d2-a3d5-3a2e99a2a04c] Running
	I0729 13:31:12.208474  992950 system_pods.go:61] "kube-proxy-n6kkf" [4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1] Running
	I0729 13:31:12.208477  992950 system_pods.go:61] "kube-scheduler-ha-104111" [3236068e-5891-4cb7-aa91-8aaf93260f3a] Running
	I0729 13:31:12.208481  992950 system_pods.go:61] "kube-scheduler-ha-104111-m02" [01a2d6c2-859d-44e8-9d53-0d257b4b4a1c] Running
	I0729 13:31:12.208485  992950 system_pods.go:61] "kube-scheduler-ha-104111-m03" [89bb0ec2-3f86-4801-bf24-2a038894a39f] Running
	I0729 13:31:12.208488  992950 system_pods.go:61] "kube-vip-ha-104111" [edfeb506-2884-4406-92cf-c35fce56d7c4] Running
	I0729 13:31:12.208491  992950 system_pods.go:61] "kube-vip-ha-104111-m02" [bcc970d3-1717-4971-8216-7526fe2028ba] Running
	I0729 13:31:12.208495  992950 system_pods.go:61] "kube-vip-ha-104111-m03" [5bd067b4-c367-4504-8ba2-4325efaa53a4] Running
	I0729 13:31:12.208498  992950 system_pods.go:61] "storage-provisioner" [b61cc52e-771b-484a-99d6-8963665cb1e8] Running
	I0729 13:31:12.208506  992950 system_pods.go:74] duration metric: took 185.291614ms to wait for pod list to return data ...
	I0729 13:31:12.208516  992950 default_sa.go:34] waiting for default service account to be created ...
	I0729 13:31:12.391894  992950 request.go:629] Waited for 183.299294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/default/serviceaccounts
	I0729 13:31:12.391955  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/default/serviceaccounts
	I0729 13:31:12.391960  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.391967  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.391973  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.395215  992950 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 13:31:12.395349  992950 default_sa.go:45] found service account: "default"
	I0729 13:31:12.395368  992950 default_sa.go:55] duration metric: took 186.84439ms for default service account to be created ...
	I0729 13:31:12.395382  992950 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 13:31:12.592777  992950 request.go:629] Waited for 197.236409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:12.592845  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/namespaces/kube-system/pods
	I0729 13:31:12.592852  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.592859  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.592864  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.599102  992950 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 13:31:12.606276  992950 system_pods.go:86] 24 kube-system pods found
	I0729 13:31:12.606303  992950 system_pods.go:89] "coredns-7db6d8ff4d-9jrnl" [0453ed97-efb4-41c1-8bfb-e7e004e618e0] Running
	I0729 13:31:12.606309  992950 system_pods.go:89] "coredns-7db6d8ff4d-gcf7q" [196981ba-ed16-427c-ae8b-9b7e8ff36be2] Running
	I0729 13:31:12.606314  992950 system_pods.go:89] "etcd-ha-104111" [309561db-8f30-4b42-8252-e02d9a26ec2e] Running
	I0729 13:31:12.606319  992950 system_pods.go:89] "etcd-ha-104111-m02" [4f09acca-1baa-4eba-8ef4-eb3e2b64512c] Running
	I0729 13:31:12.606323  992950 system_pods.go:89] "etcd-ha-104111-m03" [abbb320d-2480-4658-b404-c765904bb5ea] Running
	I0729 13:31:12.606327  992950 system_pods.go:89] "kindnet-9phpm" [60e9c45f-5176-492e-90c7-49b0201afe1e] Running
	I0729 13:31:12.606331  992950 system_pods.go:89] "kindnet-mt9dk" [5f1433be-0f2b-4502-a586-4014c7f23495] Running
	I0729 13:31:12.606335  992950 system_pods.go:89] "kindnet-njndz" [a0477f9b-b1ff-49d8-8f39-21ffb84377e9] Running
	I0729 13:31:12.606340  992950 system_pods.go:89] "kube-apiserver-ha-104111" [d546ecd4-9bdb-4e41-9e4a-74d0c81359d5] Running
	I0729 13:31:12.606344  992950 system_pods.go:89] "kube-apiserver-ha-104111-m02" [70bd608c-3ebe-4306-8ec9-61c254ca5261] Running
	I0729 13:31:12.606349  992950 system_pods.go:89] "kube-apiserver-ha-104111-m03" [8c1333cd-2e2a-4e55-af7a-6b399d6ecefa] Running
	I0729 13:31:12.606356  992950 system_pods.go:89] "kube-controller-manager-ha-104111" [03be8232-ff90-43e1-87e0-5d61aeaa7c96] Running
	I0729 13:31:12.606360  992950 system_pods.go:89] "kube-controller-manager-ha-104111-m02" [d2ca4758-3c38-4655-8bfb-b5a64b0b6bca] Running
	I0729 13:31:12.606367  992950 system_pods.go:89] "kube-controller-manager-ha-104111-m03" [ee0172da-49de-422e-b0cc-f015e6978f15] Running
	I0729 13:31:12.606371  992950 system_pods.go:89] "kube-proxy-5dnvv" [2fb3553e-b114-4528-bf9a-1765356bb2a4] Running
	I0729 13:31:12.606376  992950 system_pods.go:89] "kube-proxy-m765x" [d1051d27-125e-48d2-a3d5-3a2e99a2a04c] Running
	I0729 13:31:12.606380  992950 system_pods.go:89] "kube-proxy-n6kkf" [4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1] Running
	I0729 13:31:12.606386  992950 system_pods.go:89] "kube-scheduler-ha-104111" [3236068e-5891-4cb7-aa91-8aaf93260f3a] Running
	I0729 13:31:12.606391  992950 system_pods.go:89] "kube-scheduler-ha-104111-m02" [01a2d6c2-859d-44e8-9d53-0d257b4b4a1c] Running
	I0729 13:31:12.606399  992950 system_pods.go:89] "kube-scheduler-ha-104111-m03" [89bb0ec2-3f86-4801-bf24-2a038894a39f] Running
	I0729 13:31:12.606408  992950 system_pods.go:89] "kube-vip-ha-104111" [edfeb506-2884-4406-92cf-c35fce56d7c4] Running
	I0729 13:31:12.606414  992950 system_pods.go:89] "kube-vip-ha-104111-m02" [bcc970d3-1717-4971-8216-7526fe2028ba] Running
	I0729 13:31:12.606423  992950 system_pods.go:89] "kube-vip-ha-104111-m03" [5bd067b4-c367-4504-8ba2-4325efaa53a4] Running
	I0729 13:31:12.606429  992950 system_pods.go:89] "storage-provisioner" [b61cc52e-771b-484a-99d6-8963665cb1e8] Running
	I0729 13:31:12.606442  992950 system_pods.go:126] duration metric: took 211.050956ms to wait for k8s-apps to be running ...
	I0729 13:31:12.606454  992950 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 13:31:12.606528  992950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:31:12.623930  992950 system_svc.go:56] duration metric: took 17.466008ms WaitForService to wait for kubelet
	I0729 13:31:12.623966  992950 kubeadm.go:582] duration metric: took 21.717566553s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:31:12.623996  992950 node_conditions.go:102] verifying NodePressure condition ...
	I0729 13:31:12.792481  992950 request.go:629] Waited for 168.383302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.120:8443/api/v1/nodes
	I0729 13:31:12.792561  992950 round_trippers.go:463] GET https://192.168.39.120:8443/api/v1/nodes
	I0729 13:31:12.792568  992950 round_trippers.go:469] Request Headers:
	I0729 13:31:12.792576  992950 round_trippers.go:473]     Accept: application/json, */*
	I0729 13:31:12.792581  992950 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 13:31:12.796627  992950 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 13:31:12.797777  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:31:12.797816  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:31:12.797828  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:31:12.797832  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:31:12.797835  992950 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 13:31:12.797838  992950 node_conditions.go:123] node cpu capacity is 2
	I0729 13:31:12.797842  992950 node_conditions.go:105] duration metric: took 173.840002ms to run NodePressure ...
	I0729 13:31:12.797854  992950 start.go:241] waiting for startup goroutines ...
	I0729 13:31:12.797884  992950 start.go:255] writing updated cluster config ...
	I0729 13:31:12.798221  992950 ssh_runner.go:195] Run: rm -f paused
	I0729 13:31:12.853479  992950 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 13:31:12.855934  992950 out.go:177] * Done! kubectl is now configured to use "ha-104111" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.745672242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260140745645125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a069376b-ad2b-4263-aab1-05caf867c6be name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.746421421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b12de9ee-92c8-4873-af7c-61c36ced95f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.746480531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b12de9ee-92c8-4873-af7c-61c36ced95f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.746836893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722259875257833512,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98,PodSandboxId:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722259743370087937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743321034127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743267402214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16
-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONT
AINER_RUNNING,CreatedAt:1722259731581771295,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17222597280
97863200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd,PodSandboxId:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722259709027
885819,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259707029920791,Labels:map[string]string{i
o.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3,PodSandboxId:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259707048079889,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda,PodSandboxId:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259707040230251,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259706969247554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b12de9ee-92c8-4873-af7c-61c36ced95f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.784846571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d945f0d-530f-40a2-9559-5f98e0b22541 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.784955287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d945f0d-530f-40a2-9559-5f98e0b22541 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.786860163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a756d2f5-990d-4378-ab6b-43d9db904e2f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.787326536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260140787303570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a756d2f5-990d-4378-ab6b-43d9db904e2f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.787916586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e4a3062-57dc-4b5d-a2c6-0dfa4b8a9c99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.787989223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e4a3062-57dc-4b5d-a2c6-0dfa4b8a9c99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.788225963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722259875257833512,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98,PodSandboxId:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722259743370087937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743321034127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743267402214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16
-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONT
AINER_RUNNING,CreatedAt:1722259731581771295,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17222597280
97863200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd,PodSandboxId:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722259709027
885819,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259707029920791,Labels:map[string]string{i
o.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3,PodSandboxId:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259707048079889,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda,PodSandboxId:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259707040230251,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259706969247554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e4a3062-57dc-4b5d-a2c6-0dfa4b8a9c99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.831878031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=619c3cfc-5b59-4586-8581-11d089cb55c2 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.831974245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=619c3cfc-5b59-4586-8581-11d089cb55c2 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.833261704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95b9b20a-f8fd-41d1-9919-db0200b92d6c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.833789944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260140833765164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95b9b20a-f8fd-41d1-9919-db0200b92d6c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.834288773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=362f842b-d884-45ea-91df-27ee3883525d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.834370034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=362f842b-d884-45ea-91df-27ee3883525d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.834706919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722259875257833512,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98,PodSandboxId:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722259743370087937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743321034127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743267402214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16
-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONT
AINER_RUNNING,CreatedAt:1722259731581771295,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17222597280
97863200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd,PodSandboxId:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722259709027
885819,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259707029920791,Labels:map[string]string{i
o.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3,PodSandboxId:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259707048079889,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda,PodSandboxId:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259707040230251,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259706969247554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=362f842b-d884-45ea-91df-27ee3883525d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.876791955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0052c26d-d491-4f11-bfea-1c815fd396e3 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.876866816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0052c26d-d491-4f11-bfea-1c815fd396e3 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.878007294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf2b8aa2-83be-4cbb-a5d2-b87030bd77db name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.878456587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260140878433900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf2b8aa2-83be-4cbb-a5d2-b87030bd77db name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.879227494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cc6057d-e428-4fca-bed0-dd98202d5f71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.879286946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cc6057d-e428-4fca-bed0-dd98202d5f71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:35:40 ha-104111 crio[679]: time="2024-07-29 13:35:40.879514118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722259875257833512,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98,PodSandboxId:53fbe065ff1134e62b3e0364c43521280c8ec8461e8fc752dc336ac259ee602f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722259743370087937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743321034127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722259743267402214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16
-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONT
AINER_RUNNING,CreatedAt:1722259731581771295,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17222597280
97863200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd,PodSandboxId:5433f25fd3565411866feae928f058891d075bb4e913fce385b7e59dd41bdaeb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722259709027
885819,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143501fa8691d69c4a62f32dafe175d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722259707029920791,Labels:map[string]string{i
o.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3,PodSandboxId:052a439cf2eb6a200a6faaec78d6da0295c4f4ef92020ac5a8e57df53f19392e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722259707048079889,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda,PodSandboxId:46fd4a1e9c0a6fe5d0cb0f78a6b0ed82fff4a6aeec47ec1a58187cc16e899e57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722259707040230251,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722259706969247554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cc6057d-e428-4fca-bed0-dd98202d5f71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2a033e8feb22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   ab591310b6636       busybox-fc5497c4f-7xsjn
	1b86114506804       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   53fbe065ff113       storage-provisioner
	721762ac4017a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   d0c4b0845fee9       coredns-7db6d8ff4d-9jrnl
	81eca3ce5b15d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   32a1e9c01260e       coredns-7db6d8ff4d-gcf7q
	8fcba14c355c5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   6b9961791d750       kindnet-9phpm
	6bc357136c66b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   60e3f945e9e89       kube-proxy-n6kkf
	50fe26dbcca1a       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   5433f25fd3565       kube-vip-ha-104111
	e4cce61f41e5d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   052a439cf2eb6       kube-apiserver-ha-104111
	8a9167ef54b81       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   46fd4a1e9c0a6       kube-controller-manager-ha-104111
	e80af660361f5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   a33809de7d3a6       kube-scheduler-ha-104111
	7606e1f107d6c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   3c84713255503       etcd-ha-104111
	
	
	==> coredns [721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56] <==
	[INFO] 10.244.0.4:51396 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003490725s
	[INFO] 10.244.0.4:37443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227588s
	[INFO] 10.244.0.4:33041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214614s
	[INFO] 10.244.2.2:60214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209682s
	[INFO] 10.244.2.2:35659 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00147984s
	[INFO] 10.244.2.2:53135 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000226257s
	[INFO] 10.244.2.2:49731 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189094s
	[INFO] 10.244.2.2:47456 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130859s
	[INFO] 10.244.2.2:41111 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123604s
	[INFO] 10.244.1.2:55083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114636s
	[INFO] 10.244.1.2:48422 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00109487s
	[INFO] 10.244.0.4:39213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116126s
	[INFO] 10.244.0.4:33260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068728s
	[INFO] 10.244.2.2:48083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166018s
	[INFO] 10.244.2.2:58646 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172185s
	[INFO] 10.244.2.2:35393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009321s
	[INFO] 10.244.1.2:57222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116426s
	[INFO] 10.244.0.4:60530 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165705s
	[INFO] 10.244.0.4:35848 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187393s
	[INFO] 10.244.0.4:34740 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104846s
	[INFO] 10.244.2.2:55008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235338s
	[INFO] 10.244.2.2:47084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152504s
	[INFO] 10.244.2.2:39329 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115623s
	[INFO] 10.244.1.2:57485 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001155s
	[INFO] 10.244.1.2:42349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100298s
	
	
	==> coredns [81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1] <==
	[INFO] 10.244.2.2:37347 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001498018s
	[INFO] 10.244.1.2:37481 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000448308s
	[INFO] 10.244.0.4:51964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090052s
	[INFO] 10.244.0.4:47886 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000124742s
	[INFO] 10.244.0.4:34248 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008049906s
	[INFO] 10.244.0.4:59749 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102743s
	[INFO] 10.244.0.4:46792 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124933s
	[INFO] 10.244.2.2:34901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159776s
	[INFO] 10.244.2.2:53333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001076187s
	[INFO] 10.244.1.2:57672 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002003185s
	[INFO] 10.244.1.2:53227 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161629s
	[INFO] 10.244.1.2:38444 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092353s
	[INFO] 10.244.1.2:56499 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211011s
	[INFO] 10.244.1.2:57556 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068457s
	[INFO] 10.244.1.2:34023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109815s
	[INFO] 10.244.0.4:40329 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111231s
	[INFO] 10.244.0.4:38637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005437s
	[INFO] 10.244.2.2:36810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104372s
	[INFO] 10.244.1.2:53024 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122232s
	[INFO] 10.244.1.2:40257 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148245s
	[INFO] 10.244.1.2:41500 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080394s
	[INFO] 10.244.0.4:48915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151276s
	[INFO] 10.244.2.2:60231 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001284s
	[INFO] 10.244.1.2:33829 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154134s
	[INFO] 10.244.1.2:57945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123045s
	
	
	==> describe nodes <==
	Name:               ha-104111
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_28_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:28:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:31:37 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:31:37 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:31:37 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:31:37 +0000   Mon, 29 Jul 2024 13:29:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-104111
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 613eb8d959344be3989ec50055edd8a7
	  System UUID:                613eb8d9-5934-4be3-989e-c50055edd8a7
	  Boot ID:                    5cf31ff2-8a2f-47f5-8440-f13293b7049d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7xsjn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 coredns-7db6d8ff4d-9jrnl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m55s
	  kube-system                 coredns-7db6d8ff4d-gcf7q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m55s
	  kube-system                 etcd-ha-104111                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m8s
	  kube-system                 kindnet-9phpm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m55s
	  kube-system                 kube-apiserver-ha-104111             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-controller-manager-ha-104111    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-proxy-n6kkf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-scheduler-ha-104111             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-vip-ha-104111                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m52s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m15s (x7 over 7m15s)  kubelet          Node ha-104111 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m15s (x8 over 7m15s)  kubelet          Node ha-104111 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s (x8 over 7m15s)  kubelet          Node ha-104111 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m8s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m8s                   kubelet          Node ha-104111 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m8s                   kubelet          Node ha-104111 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m8s                   kubelet          Node ha-104111 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m56s                  node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal  NodeReady                6m39s                  kubelet          Node ha-104111 status is now: NodeReady
	  Normal  RegisteredNode           5m50s                  node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal  RegisteredNode           4m35s                  node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	
	
	Name:               ha-104111-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_29_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:29:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:32:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 13:31:36 +0000   Mon, 29 Jul 2024 13:33:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 13:31:36 +0000   Mon, 29 Jul 2024 13:33:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 13:31:36 +0000   Mon, 29 Jul 2024 13:33:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 13:31:36 +0000   Mon, 29 Jul 2024 13:33:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    ha-104111-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0636dc68c5464326baedc11fd97131b2
	  System UUID:                0636dc68-c546-4326-baed-c11fd97131b2
	  Boot ID:                    0bd45770-0fe8-46cb-acfe-7c6dd18b1400
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sf8mb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-104111-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m7s
	  kube-system                 kindnet-njndz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m8s
	  kube-system                 kube-apiserver-ha-104111-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-ha-104111-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-5dnvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-ha-104111-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-vip-ha-104111-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m9s)  kubelet          Node ha-104111-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m9s)  kubelet          Node ha-104111-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x7 over 6m9s)  kubelet          Node ha-104111-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m6s                 node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           5m50s                node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           4m35s                node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  NodeNotReady             2m31s                node-controller  Node ha-104111-m02 status is now: NodeNotReady
	
	
	Name:               ha-104111-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_30_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:30:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:35:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:31:17 +0000   Mon, 29 Jul 2024 13:30:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:31:17 +0000   Mon, 29 Jul 2024 13:30:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:31:17 +0000   Mon, 29 Jul 2024 13:30:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:31:17 +0000   Mon, 29 Jul 2024 13:31:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-104111-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60c8ff09952242e0a709074b86dabf4c
	  System UUID:                60c8ff09-9522-42e0-a709-074b86dabf4c
	  Boot ID:                    fc349b62-d8b8-486e-8c1c-4a831212a0da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cbdn4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-104111-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m53s
	  kube-system                 kindnet-mt9dk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m55s
	  kube-system                 kube-apiserver-ha-104111-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-ha-104111-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-m765x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-ha-104111-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-vip-ha-104111-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node ha-104111-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node ha-104111-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node ha-104111-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal  RegisteredNode           4m35s                  node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	
	
	Name:               ha-104111-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_31_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:31:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:35:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:32:18 +0000   Mon, 29 Jul 2024 13:31:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:32:18 +0000   Mon, 29 Jul 2024 13:31:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:32:18 +0000   Mon, 29 Jul 2024 13:31:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:32:18 +0000   Mon, 29 Jul 2024 13:32:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-104111-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f6a0723aec74e89b187376957d3127c
	  System UUID:                4f6a0723-aec7-4e89-b187-376957d3127c
	  Boot ID:                    c3e20e4f-136a-4900-a21d-f31b613ea791
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fbnbc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-cmtgm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet          Node ha-104111-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet          Node ha-104111-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet          Node ha-104111-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal  NodeReady                3m35s                  kubelet          Node ha-104111-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 13:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050265] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039095] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.731082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul29 13:28] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.580943] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.840145] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057324] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056430] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.166263] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.131459] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268832] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.161114] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +3.971481] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.058723] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.299885] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.972930] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 13:29] kauditd_printk_skb: 38 callbacks suppressed
	[ +36.765451] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a] <==
	{"level":"warn","ts":"2024-07-29T13:35:41.143422Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.148897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.15232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.153715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.166377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.17308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.179738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.183854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.187801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.195916Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.19882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.2049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.214459Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.220165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.224488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.237187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.252724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.253643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.2608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.263861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.266856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.273195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.287191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.293065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:35:41.354641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:35:41 up 7 min,  0 users,  load average: 0.31, 0.53, 0.29
	Linux ha-104111 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1] <==
	I0729 13:35:02.635863       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:35:12.632622       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:35:12.632765       1 main.go:299] handling current node
	I0729 13:35:12.632807       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:35:12.632844       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:35:12.633065       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:35:12.633089       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:35:12.633153       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:35:12.633183       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:35:22.627291       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:35:22.627474       1 main.go:299] handling current node
	I0729 13:35:22.627517       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:35:22.627603       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:35:22.627878       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:35:22.627922       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:35:22.628014       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:35:22.628042       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:35:32.630418       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:35:32.630461       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:35:32.630651       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:35:32.630678       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:35:32.630734       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:35:32.630753       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:35:32.630816       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:35:32.630836       1 main.go:299] handling current node
	
	
	==> kube-apiserver [e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3] <==
	I0729 13:28:31.879445       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 13:28:31.886487       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.120]
	I0729 13:28:31.887810       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 13:28:31.893218       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 13:28:32.148356       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 13:28:33.305261       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 13:28:33.328420       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 13:28:33.360296       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 13:28:46.103373       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 13:28:46.434987       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 13:31:17.232046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43602: use of closed network connection
	E0729 13:31:17.428737       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43610: use of closed network connection
	E0729 13:31:17.614987       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43622: use of closed network connection
	E0729 13:31:17.800465       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43638: use of closed network connection
	E0729 13:31:18.002027       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43648: use of closed network connection
	E0729 13:31:18.196945       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43666: use of closed network connection
	E0729 13:31:18.375757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43672: use of closed network connection
	E0729 13:31:18.572267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43688: use of closed network connection
	E0729 13:31:18.747369       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43704: use of closed network connection
	E0729 13:31:19.036524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43720: use of closed network connection
	E0729 13:31:19.223066       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43734: use of closed network connection
	E0729 13:31:19.395856       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43746: use of closed network connection
	E0729 13:31:19.572154       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43764: use of closed network connection
	E0729 13:31:19.751874       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47734: use of closed network connection
	W0729 13:32:41.896160       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.120 192.168.39.202]
	
	
	==> kube-controller-manager [8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda] <==
	I0729 13:30:46.832484       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-104111-m03" podCIDRs=["10.244.2.0/24"]
	I0729 13:30:50.424699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-104111-m03"
	I0729 13:31:13.771690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.903666ms"
	I0729 13:31:13.805615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.470887ms"
	I0729 13:31:13.805737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.891µs"
	I0729 13:31:13.951864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.326641ms"
	I0729 13:31:14.138975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="186.965426ms"
	I0729 13:31:14.139190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.694µs"
	I0729 13:31:14.244057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.90239ms"
	I0729 13:31:14.244328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.196µs"
	I0729 13:31:15.232746       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.393µs"
	I0729 13:31:15.998647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.868229ms"
	I0729 13:31:15.998867       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.957µs"
	I0729 13:31:16.138932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.195922ms"
	I0729 13:31:16.139066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.833µs"
	I0729 13:31:16.764015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.067893ms"
	I0729 13:31:16.764302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.513µs"
	E0729 13:31:47.430090       1 certificate_controller.go:146] Sync csr-p7x6f failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-p7x6f": the object has been modified; please apply your changes to the latest version and try again
	I0729 13:31:47.708092       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-104111-m04\" does not exist"
	I0729 13:31:47.754896       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-104111-m04" podCIDRs=["10.244.3.0/24"]
	I0729 13:31:50.456726       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-104111-m04"
	I0729 13:32:06.415875       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-104111-m04"
	I0729 13:33:10.502660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-104111-m04"
	I0729 13:33:10.685858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.213158ms"
	I0729 13:33:10.686142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.532µs"
	
	
	==> kube-proxy [6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8] <==
	I0729 13:28:48.318203       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:28:48.335952       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	I0729 13:28:48.436864       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:28:48.436915       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:28:48.437047       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:28:48.441369       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:28:48.441837       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:28:48.441853       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:28:48.443654       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:28:48.444098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:28:48.444247       1 config.go:192] "Starting service config controller"
	I0729 13:28:48.444277       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:28:48.445332       1 config.go:319] "Starting node config controller"
	I0729 13:28:48.445542       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:28:48.545120       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:28:48.545236       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:28:48.549600       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1] <==
	W0729 13:28:31.143239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:28:31.143337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 13:28:31.190121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:28:31.190215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:28:31.190453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 13:28:31.190497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 13:28:31.215678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:28:31.216250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:28:31.420427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:28:31.420483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 13:28:31.494455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 13:28:31.494504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0729 13:28:33.383078       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 13:30:46.961298       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mvvrc\": pod kindnet-mvvrc is already assigned to node \"ha-104111-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-mvvrc" node="ha-104111-m03"
	E0729 13:30:46.962268       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2166a71d-cd5a-4e07-b827-a4789ca6b3c5(kube-system/kindnet-mvvrc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mvvrc"
	E0729 13:30:46.962360       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mvvrc\": pod kindnet-mvvrc is already assigned to node \"ha-104111-m03\"" pod="kube-system/kindnet-mvvrc"
	I0729 13:30:46.962417       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mvvrc" node="ha-104111-m03"
	E0729 13:30:46.969946       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sdksp\": pod kube-proxy-sdksp is already assigned to node \"ha-104111-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sdksp" node="ha-104111-m03"
	E0729 13:30:46.970015       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6018426d-cebb-4c86-a261-c760ae46b755(kube-system/kube-proxy-sdksp) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sdksp"
	E0729 13:30:46.970035       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sdksp\": pod kube-proxy-sdksp is already assigned to node \"ha-104111-m03\"" pod="kube-system/kube-proxy-sdksp"
	I0729 13:30:46.970053       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sdksp" node="ha-104111-m03"
	E0729 13:31:47.773124       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fbnbc\": pod kindnet-fbnbc is already assigned to node \"ha-104111-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fbnbc" node="ha-104111-m04"
	E0729 13:31:47.773246       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fbnbc\": pod kindnet-fbnbc is already assigned to node \"ha-104111-m04\"" pod="kube-system/kindnet-fbnbc"
	E0729 13:31:47.773666       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cmtgm\": pod kube-proxy-cmtgm is already assigned to node \"ha-104111-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cmtgm" node="ha-104111-m04"
	E0729 13:31:47.773724       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cmtgm\": pod kube-proxy-cmtgm is already assigned to node \"ha-104111-m04\"" pod="kube-system/kube-proxy-cmtgm"
	
	
	==> kubelet <==
	Jul 29 13:31:33 ha-104111 kubelet[1361]: E0729 13:31:33.264239    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:31:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:31:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:31:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:31:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:32:33 ha-104111 kubelet[1361]: E0729 13:32:33.269290    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:32:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:32:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:32:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:32:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:33:33 ha-104111 kubelet[1361]: E0729 13:33:33.265694    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:33:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:33:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:33:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:33:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:34:33 ha-104111 kubelet[1361]: E0729 13:34:33.264440    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:34:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:34:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:34:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:34:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:35:33 ha-104111 kubelet[1361]: E0729 13:35:33.265664    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:35:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:35:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:35:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:35:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-104111 -n ha-104111
helpers_test.go:261: (dbg) Run:  kubectl --context ha-104111 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.15s)
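Two patterns dominate the post-mortem logs above and neither is the direct cause of the failure: the DefaultBinder "already assigned" errors are the HA schedulers racing to bind DaemonSet pods that another instance has already placed (the scheduler itself then logs "Pod has been assigned to node. Abort adding it back to queue."), and the repeating kubelet canary error means the guest kernel has no ip6tables nat table loaded, which appears benign for an IPv4-only cluster. A minimal sketch for checking the latter by hand, assuming the ha-104111 profile shown above and that the guest image ships the usual iptables tooling (commands are a diagnostic sketch, not part of the test):

  # confirm the IPv6 nat table state on the node that logs the canary error
  out/minikube-linux-amd64 -p ha-104111 ssh "sudo ip6tables -t nat -L -n"
  out/minikube-linux-amd64 -p ha-104111 ssh "lsmod | grep ip6table_nat"
  # if the module is absent, loading it lets the canary chain be created
  out/minikube-linux-amd64 -p ha-104111 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -N KUBE-KUBELET-CANARY"

If the module loads cleanly, the once-a-minute canary error should stop appearing in the kubelet journal.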

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (408.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-104111 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-104111 -v=7 --alsologtostderr
E0729 13:37:06.666080  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:37:34.352446  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-104111 -v=7 --alsologtostderr: exit status 82 (2m1.823479941s)

                                                
                                                
-- stdout --
	* Stopping node "ha-104111-m04"  ...
	* Stopping node "ha-104111-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:35:42.733247  998559 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:35:42.733356  998559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:42.733364  998559 out.go:304] Setting ErrFile to fd 2...
	I0729 13:35:42.733368  998559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:35:42.733529  998559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:35:42.733767  998559 out.go:298] Setting JSON to false
	I0729 13:35:42.733850  998559 mustload.go:65] Loading cluster: ha-104111
	I0729 13:35:42.734224  998559 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:35:42.734311  998559 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:35:42.734479  998559 mustload.go:65] Loading cluster: ha-104111
	I0729 13:35:42.734602  998559 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:35:42.734645  998559 stop.go:39] StopHost: ha-104111-m04
	I0729 13:35:42.735051  998559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:42.735089  998559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:42.750873  998559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0729 13:35:42.751342  998559 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:42.751898  998559 main.go:141] libmachine: Using API Version  1
	I0729 13:35:42.751922  998559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:42.752293  998559 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:42.754808  998559 out.go:177] * Stopping node "ha-104111-m04"  ...
	I0729 13:35:42.756068  998559 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 13:35:42.756105  998559 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:35:42.756330  998559 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 13:35:42.756358  998559 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:35:42.759085  998559 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:42.759534  998559 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:31:34 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:35:42.759562  998559 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:35:42.759747  998559 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:35:42.759952  998559 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:35:42.760118  998559 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:35:42.760260  998559 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:35:42.843825  998559 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 13:35:42.897046  998559 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 13:35:42.951263  998559 main.go:141] libmachine: Stopping "ha-104111-m04"...
	I0729 13:35:42.951320  998559 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:35:42.952858  998559 main.go:141] libmachine: (ha-104111-m04) Calling .Stop
	I0729 13:35:42.956582  998559 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 0/120
	I0729 13:35:44.091045  998559 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:35:44.092443  998559 main.go:141] libmachine: Machine "ha-104111-m04" was stopped.
	I0729 13:35:44.092466  998559 stop.go:75] duration metric: took 1.33639769s to stop
	I0729 13:35:44.092490  998559 stop.go:39] StopHost: ha-104111-m03
	I0729 13:35:44.092916  998559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:35:44.092967  998559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:35:44.108153  998559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39945
	I0729 13:35:44.108706  998559 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:35:44.109235  998559 main.go:141] libmachine: Using API Version  1
	I0729 13:35:44.109267  998559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:35:44.109606  998559 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:35:44.112250  998559 out.go:177] * Stopping node "ha-104111-m03"  ...
	I0729 13:35:44.113439  998559 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 13:35:44.113472  998559 main.go:141] libmachine: (ha-104111-m03) Calling .DriverName
	I0729 13:35:44.113688  998559 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 13:35:44.113715  998559 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHHostname
	I0729 13:35:44.116636  998559 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:44.117087  998559 main.go:141] libmachine: (ha-104111-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:86:be", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:30:10 +0000 UTC Type:0 Mac:52:54:00:4a:86:be Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-104111-m03 Clientid:01:52:54:00:4a:86:be}
	I0729 13:35:44.117107  998559 main.go:141] libmachine: (ha-104111-m03) DBG | domain ha-104111-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:4a:86:be in network mk-ha-104111
	I0729 13:35:44.117245  998559 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHPort
	I0729 13:35:44.117421  998559 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHKeyPath
	I0729 13:35:44.117600  998559 main.go:141] libmachine: (ha-104111-m03) Calling .GetSSHUsername
	I0729 13:35:44.117703  998559 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m03/id_rsa Username:docker}
	I0729 13:35:44.199696  998559 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 13:35:44.254255  998559 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 13:35:44.308589  998559 main.go:141] libmachine: Stopping "ha-104111-m03"...
	I0729 13:35:44.308624  998559 main.go:141] libmachine: (ha-104111-m03) Calling .GetState
	I0729 13:35:44.310274  998559 main.go:141] libmachine: (ha-104111-m03) Calling .Stop
	I0729 13:35:44.314025  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 0/120
	I0729 13:35:45.315449  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 1/120
	I0729 13:35:46.316919  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 2/120
	I0729 13:35:47.318460  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 3/120
	I0729 13:35:48.320893  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 4/120
	I0729 13:35:49.323255  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 5/120
	I0729 13:35:50.324369  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 6/120
	I0729 13:35:51.325683  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 7/120
	I0729 13:35:52.327390  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 8/120
	I0729 13:35:53.328856  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 9/120
	I0729 13:35:54.331184  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 10/120
	I0729 13:35:55.332553  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 11/120
	I0729 13:35:56.333702  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 12/120
	I0729 13:35:57.335082  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 13/120
	I0729 13:35:58.336319  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 14/120
	I0729 13:35:59.338119  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 15/120
	I0729 13:36:00.339689  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 16/120
	I0729 13:36:01.341303  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 17/120
	I0729 13:36:02.342756  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 18/120
	I0729 13:36:03.344421  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 19/120
	I0729 13:36:04.346578  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 20/120
	I0729 13:36:05.348048  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 21/120
	I0729 13:36:06.349615  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 22/120
	I0729 13:36:07.350951  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 23/120
	I0729 13:36:08.352303  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 24/120
	I0729 13:36:09.354128  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 25/120
	I0729 13:36:10.355523  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 26/120
	I0729 13:36:11.357013  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 27/120
	I0729 13:36:12.358473  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 28/120
	I0729 13:36:13.359780  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 29/120
	I0729 13:36:14.361600  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 30/120
	I0729 13:36:15.362924  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 31/120
	I0729 13:36:16.364431  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 32/120
	I0729 13:36:17.365818  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 33/120
	I0729 13:36:18.367055  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 34/120
	I0729 13:36:19.368726  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 35/120
	I0729 13:36:20.370680  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 36/120
	I0729 13:36:21.372456  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 37/120
	I0729 13:36:22.373771  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 38/120
	I0729 13:36:23.375136  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 39/120
	I0729 13:36:24.377245  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 40/120
	I0729 13:36:25.378609  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 41/120
	I0729 13:36:26.379969  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 42/120
	I0729 13:36:27.381368  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 43/120
	I0729 13:36:28.382754  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 44/120
	I0729 13:36:29.385065  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 45/120
	I0729 13:36:30.386458  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 46/120
	I0729 13:36:31.387665  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 47/120
	I0729 13:36:32.389108  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 48/120
	I0729 13:36:33.390936  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 49/120
	I0729 13:36:34.392699  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 50/120
	I0729 13:36:35.394825  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 51/120
	I0729 13:36:36.396371  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 52/120
	I0729 13:36:37.397630  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 53/120
	I0729 13:36:38.399019  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 54/120
	I0729 13:36:39.400764  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 55/120
	I0729 13:36:40.402033  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 56/120
	I0729 13:36:41.403409  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 57/120
	I0729 13:36:42.404627  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 58/120
	I0729 13:36:43.406937  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 59/120
	I0729 13:36:44.408632  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 60/120
	I0729 13:36:45.410842  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 61/120
	I0729 13:36:46.412199  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 62/120
	I0729 13:36:47.413533  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 63/120
	I0729 13:36:48.415051  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 64/120
	I0729 13:36:49.416911  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 65/120
	I0729 13:36:50.418392  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 66/120
	I0729 13:36:51.419733  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 67/120
	I0729 13:36:52.421009  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 68/120
	I0729 13:36:53.422949  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 69/120
	I0729 13:36:54.424853  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 70/120
	I0729 13:36:55.426285  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 71/120
	I0729 13:36:56.427592  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 72/120
	I0729 13:36:57.428842  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 73/120
	I0729 13:36:58.430938  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 74/120
	I0729 13:36:59.432682  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 75/120
	I0729 13:37:00.434067  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 76/120
	I0729 13:37:01.435317  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 77/120
	I0729 13:37:02.437106  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 78/120
	I0729 13:37:03.438517  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 79/120
	I0729 13:37:04.440437  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 80/120
	I0729 13:37:05.442105  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 81/120
	I0729 13:37:06.443518  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 82/120
	I0729 13:37:07.445380  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 83/120
	I0729 13:37:08.446752  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 84/120
	I0729 13:37:09.448946  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 85/120
	I0729 13:37:10.450624  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 86/120
	I0729 13:37:11.452247  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 87/120
	I0729 13:37:12.454367  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 88/120
	I0729 13:37:13.455690  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 89/120
	I0729 13:37:14.457462  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 90/120
	I0729 13:37:15.459829  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 91/120
	I0729 13:37:16.461443  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 92/120
	I0729 13:37:17.462925  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 93/120
	I0729 13:37:18.464077  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 94/120
	I0729 13:37:19.465653  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 95/120
	I0729 13:37:20.467112  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 96/120
	I0729 13:37:21.468267  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 97/120
	I0729 13:37:22.469682  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 98/120
	I0729 13:37:23.471015  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 99/120
	I0729 13:37:24.472679  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 100/120
	I0729 13:37:25.474059  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 101/120
	I0729 13:37:26.475351  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 102/120
	I0729 13:37:27.476793  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 103/120
	I0729 13:37:28.478058  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 104/120
	I0729 13:37:29.479712  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 105/120
	I0729 13:37:30.481102  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 106/120
	I0729 13:37:31.482431  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 107/120
	I0729 13:37:32.483975  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 108/120
	I0729 13:37:33.485165  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 109/120
	I0729 13:37:34.486456  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 110/120
	I0729 13:37:35.487854  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 111/120
	I0729 13:37:36.489325  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 112/120
	I0729 13:37:37.490733  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 113/120
	I0729 13:37:38.492186  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 114/120
	I0729 13:37:39.494264  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 115/120
	I0729 13:37:40.495776  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 116/120
	I0729 13:37:41.497040  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 117/120
	I0729 13:37:42.498268  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 118/120
	I0729 13:37:43.499468  998559 main.go:141] libmachine: (ha-104111-m03) Waiting for machine to stop 119/120
	I0729 13:37:44.500001  998559 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 13:37:44.500073  998559 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 13:37:44.502202  998559 out.go:177] 
	W0729 13:37:44.503748  998559 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 13:37:44.503764  998559 out.go:239] * 
	* 
	W0729 13:37:44.508334  998559 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:37:44.509587  998559 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-104111 -v=7 --alsologtostderr" : exit status 82
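Exit status 82 is the GUEST_STOP_TIMEOUT path visible in the stderr above: the kvm2 driver polls "Waiting for machine to stop n/120" roughly once per second and gives up after 120 attempts, which matches the 2m1.8s runtime of the stop command while ha-104111-m03 stayed "Running". A hedged sketch of how one might inspect and clear the stuck domain from the libvirt side on the Jenkins host, assuming virsh is available and using the qemu:///system URI the driver logs; the domain names match the node names printed above:

  # inspect the domain from the libvirt side
  virsh -c qemu:///system list --all | grep ha-104111
  # ask for a graceful ACPI shutdown first; force off only if the guest ignores it
  virsh -c qemu:///system shutdown ha-104111-m03
  virsh -c qemu:///system destroy ha-104111-m03
  # a retry of the stop should then return without the two-minute timeout
  out/minikube-linux-amd64 stop -p ha-104111 -v=7 --alsologtostderr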
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-104111 --wait=true -v=7 --alsologtostderr
E0729 13:39:30.662415  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:40:53.710182  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:42:06.665777  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-104111 --wait=true -v=7 --alsologtostderr: (4m44.078035024s)
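The cert_rotation errors interleaved with the stop and restart above come from the long-running test process (pid 982046), whose client-go certificate reloader appears to still reference client certificates of profiles that earlier tests already tore down (functional-669544, addons-881745); they are noise relative to this test. A small sketch for confirming that locally, assuming the kubeconfig path printed in the errors and that only stale contexts are affected:

  # list contexts in the kubeconfig the test run is using
  KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig kubectl config get-contexts
  # a context whose client.crt was removed along with its profile can be dropped locally
  KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig kubectl config delete-context functional-669544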
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-104111
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-104111 -n ha-104111
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-104111 logs -n 25: (1.776603226s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m02:/home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m02 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04:/home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m04 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp testdata/cp-test.txt                                                | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111:/home/docker/cp-test_ha-104111-m04_ha-104111.txt                       |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111 sudo cat                                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111.txt                                 |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m02:/home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m02 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03:/home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m03 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-104111 node stop m02 -v=7                                                     | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-104111 node start m02 -v=7                                                    | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-104111 -v=7                                                           | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-104111 -v=7                                                                | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-104111 --wait=true -v=7                                                    | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:37 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-104111                                                                | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:42 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:37:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:37:44.557508  999015 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:37:44.557762  999015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:37:44.557772  999015 out.go:304] Setting ErrFile to fd 2...
	I0729 13:37:44.557776  999015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:37:44.558031  999015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:37:44.558643  999015 out.go:298] Setting JSON to false
	I0729 13:37:44.559666  999015 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12017,"bootTime":1722248248,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:37:44.559728  999015 start.go:139] virtualization: kvm guest
	I0729 13:37:44.561760  999015 out.go:177] * [ha-104111] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:37:44.563055  999015 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:37:44.563111  999015 notify.go:220] Checking for updates...
	I0729 13:37:44.565495  999015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:37:44.566845  999015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:37:44.568270  999015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:37:44.569583  999015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:37:44.570695  999015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:37:44.572177  999015 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:44.572305  999015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:37:44.572777  999015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:37:44.572864  999015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:44.589115  999015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I0729 13:37:44.589509  999015 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:44.590171  999015 main.go:141] libmachine: Using API Version  1
	I0729 13:37:44.590195  999015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:44.590579  999015 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:44.590802  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:37:44.625137  999015 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:37:44.626309  999015 start.go:297] selected driver: kvm2
	I0729 13:37:44.626324  999015 start.go:901] validating driver "kvm2" against &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:44.626459  999015 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:37:44.626771  999015 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:37:44.626837  999015 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:37:44.641639  999015 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:37:44.642302  999015 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:37:44.642332  999015 cni.go:84] Creating CNI manager for ""
	I0729 13:37:44.642340  999015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 13:37:44.642413  999015 start.go:340] cluster config:
	{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:44.642545  999015 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:37:44.644113  999015 out.go:177] * Starting "ha-104111" primary control-plane node in "ha-104111" cluster
	I0729 13:37:44.645161  999015 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:37:44.645192  999015 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:37:44.645202  999015 cache.go:56] Caching tarball of preloaded images
	I0729 13:37:44.645265  999015 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:37:44.645275  999015 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:37:44.645393  999015 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:37:44.645587  999015 start.go:360] acquireMachinesLock for ha-104111: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:44.645623  999015 start.go:364] duration metric: took 20.105µs to acquireMachinesLock for "ha-104111"
	I0729 13:37:44.645637  999015 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:44.645644  999015 fix.go:54] fixHost starting: 
	I0729 13:37:44.645908  999015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:37:44.645940  999015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:44.659667  999015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I0729 13:37:44.660083  999015 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:44.660552  999015 main.go:141] libmachine: Using API Version  1
	I0729 13:37:44.660573  999015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:44.660958  999015 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:44.661139  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:37:44.661294  999015 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:37:44.662748  999015 fix.go:112] recreateIfNeeded on ha-104111: state=Running err=<nil>
	W0729 13:37:44.662787  999015 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:44.664558  999015 out.go:177] * Updating the running kvm2 "ha-104111" VM ...
	I0729 13:37:44.665772  999015 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:44.665788  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:37:44.665987  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:44.667977  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.668381  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:44.668460  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.668525  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:44.668680  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.668837  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.668977  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:44.669125  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:44.669315  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:37:44.669327  999015 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:44.781590  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111
	
	I0729 13:37:44.781633  999015 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:37:44.781915  999015 buildroot.go:166] provisioning hostname "ha-104111"
	I0729 13:37:44.781949  999015 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:37:44.782172  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:44.784931  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.785291  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:44.785316  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.785428  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:44.785633  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.785807  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.785978  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:44.786170  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:44.786370  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:37:44.786388  999015 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-104111 && echo "ha-104111" | sudo tee /etc/hostname
	I0729 13:37:44.911175  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111
	
	I0729 13:37:44.911201  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:44.913781  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.914138  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:44.914169  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.914327  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:44.914523  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.914696  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.914823  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:44.914992  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:44.915253  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:37:44.915275  999015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-104111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-104111/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-104111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:45.025302  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:45.025335  999015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:37:45.025400  999015 buildroot.go:174] setting up certificates
	I0729 13:37:45.025412  999015 provision.go:84] configureAuth start
	I0729 13:37:45.025426  999015 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:37:45.025717  999015 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:37:45.028316  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.028653  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:45.028681  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.028825  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:45.030976  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.031314  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:45.031340  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.031412  999015 provision.go:143] copyHostCerts
	I0729 13:37:45.031454  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:37:45.031496  999015 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 13:37:45.031507  999015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:37:45.031575  999015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:37:45.031683  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:37:45.031705  999015 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 13:37:45.031712  999015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:37:45.031736  999015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:37:45.031803  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:37:45.031819  999015 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 13:37:45.031832  999015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:37:45.031855  999015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:37:45.031915  999015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.ha-104111 san=[127.0.0.1 192.168.39.120 ha-104111 localhost minikube]
	I0729 13:37:45.247889  999015 provision.go:177] copyRemoteCerts
	I0729 13:37:45.247955  999015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:45.247986  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:45.250548  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.250858  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:45.250888  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.251015  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:45.251206  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:45.251342  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:45.251444  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:37:45.334918  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:37:45.335025  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:45.360087  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:37:45.360167  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 13:37:45.385151  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:37:45.385217  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:37:45.408311  999015 provision.go:87] duration metric: took 382.883645ms to configureAuth
	I0729 13:37:45.408337  999015 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:45.408593  999015 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:45.408671  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:45.411136  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.411465  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:45.411489  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.411662  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:45.411867  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:45.412039  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:45.412138  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:45.412303  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:45.412500  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:37:45.412516  999015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:39:16.207069  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:39:16.207137  999015 machine.go:97] duration metric: took 1m31.541350835s to provisionDockerMachine
	I0729 13:39:16.207162  999015 start.go:293] postStartSetup for "ha-104111" (driver="kvm2")
	I0729 13:39:16.207178  999015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:39:16.207202  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.207611  999015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:39:16.207655  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.210870  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.211350  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.211379  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.211565  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.211785  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.211970  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.212111  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:39:16.299727  999015 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:39:16.304015  999015 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:39:16.304040  999015 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:39:16.304110  999015 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:39:16.304229  999015 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 13:39:16.304245  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 13:39:16.304360  999015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:39:16.313389  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:39:16.336977  999015 start.go:296] duration metric: took 129.801096ms for postStartSetup
	I0729 13:39:16.337021  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.337808  999015 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 13:39:16.337845  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.341241  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.341577  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.341600  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.341785  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.341959  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.342087  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.342217  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	W0729 13:39:16.426698  999015 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 13:39:16.426728  999015 fix.go:56] duration metric: took 1m31.781083968s for fixHost
	I0729 13:39:16.426750  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.429429  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.429861  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.429899  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.430076  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.430313  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.430497  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.430657  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.430806  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:39:16.430995  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:39:16.431009  999015 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:39:16.540878  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260356.497294306
	
	I0729 13:39:16.540903  999015 fix.go:216] guest clock: 1722260356.497294306
	I0729 13:39:16.540912  999015 fix.go:229] Guest: 2024-07-29 13:39:16.497294306 +0000 UTC Remote: 2024-07-29 13:39:16.42673407 +0000 UTC m=+91.906013509 (delta=70.560236ms)
	I0729 13:39:16.540939  999015 fix.go:200] guest clock delta is within tolerance: 70.560236ms
	I0729 13:39:16.540948  999015 start.go:83] releasing machines lock for "ha-104111", held for 1m31.895313773s
	I0729 13:39:16.540978  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.541226  999015 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:39:16.543728  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.544101  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.544121  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.544265  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.544798  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.544960  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.545036  999015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:39:16.545091  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.545189  999015 ssh_runner.go:195] Run: cat /version.json
	I0729 13:39:16.545213  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.547611  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.547819  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.548020  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.548043  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.548196  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.548211  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.548224  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.548425  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.548440  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.548618  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.548625  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.548798  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.548795  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:39:16.548915  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:39:16.650013  999015 ssh_runner.go:195] Run: systemctl --version
	I0729 13:39:16.656015  999015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:39:16.819246  999015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:39:16.825417  999015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:39:16.825493  999015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:39:16.835135  999015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 13:39:16.835163  999015 start.go:495] detecting cgroup driver to use...
	I0729 13:39:16.835235  999015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:39:16.855213  999015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:39:16.869132  999015 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:39:16.869201  999015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:39:16.883555  999015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:39:16.897616  999015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:39:17.058805  999015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:39:17.201726  999015 docker.go:233] disabling docker service ...
	I0729 13:39:17.201810  999015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:39:17.220066  999015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:39:17.234734  999015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:39:17.378527  999015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:39:17.519684  999015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:39:17.534499  999015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:39:17.553557  999015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:39:17.553632  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.564559  999015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:39:17.564639  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.576170  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.586652  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.597012  999015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:39:17.608509  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.619134  999015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.630264  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.641178  999015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:39:17.651662  999015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:39:17.661699  999015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:39:17.805400  999015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:39:24.335673  999015 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.530232076s)
	I0729 13:39:24.335711  999015 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:39:24.335761  999015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:39:24.341250  999015 start.go:563] Will wait 60s for crictl version
	I0729 13:39:24.341327  999015 ssh_runner.go:195] Run: which crictl
	I0729 13:39:24.345105  999015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:39:24.385651  999015 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:39:24.385761  999015 ssh_runner.go:195] Run: crio --version
	I0729 13:39:24.415826  999015 ssh_runner.go:195] Run: crio --version
	I0729 13:39:24.446246  999015 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:39:24.447746  999015 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:39:24.450264  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:24.450620  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:24.450645  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:24.450854  999015 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:39:24.455737  999015 kubeadm.go:883] updating cluster {Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:39:24.455900  999015 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:39:24.455966  999015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:39:24.504605  999015 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:39:24.504627  999015 crio.go:433] Images already preloaded, skipping extraction
	I0729 13:39:24.504676  999015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:39:24.541116  999015 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:39:24.541142  999015 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:39:24.541154  999015 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.30.3 crio true true} ...
	I0729 13:39:24.541281  999015 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-104111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 13:39:24.541365  999015 ssh_runner.go:195] Run: crio config
	I0729 13:39:24.586689  999015 cni.go:84] Creating CNI manager for ""
	I0729 13:39:24.586707  999015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 13:39:24.586717  999015 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:39:24.586751  999015 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-104111 NodeName:ha-104111 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:39:24.586918  999015 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-104111"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 13:39:24.586937  999015 kube-vip.go:115] generating kube-vip config ...
	I0729 13:39:24.586979  999015 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 13:39:24.598621  999015 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 13:39:24.598729  999015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 13:39:24.598775  999015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:39:24.608712  999015 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:39:24.608774  999015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 13:39:24.618073  999015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 13:39:24.634958  999015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:39:24.651274  999015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 13:39:24.667064  999015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 13:39:24.685546  999015 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 13:39:24.689642  999015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:39:24.839709  999015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:39:24.855145  999015 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111 for IP: 192.168.39.120
	I0729 13:39:24.855173  999015 certs.go:194] generating shared ca certs ...
	I0729 13:39:24.855197  999015 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:39:24.855376  999015 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:39:24.855428  999015 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:39:24.855444  999015 certs.go:256] generating profile certs ...
	I0729 13:39:24.855541  999015 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key
	I0729 13:39:24.855591  999015 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.f7411b89
	I0729 13:39:24.855617  999015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.f7411b89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.120 192.168.39.140 192.168.39.202 192.168.39.254]
	I0729 13:39:25.455220  999015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.f7411b89 ...
	I0729 13:39:25.455254  999015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.f7411b89: {Name:mkc892b43f1affb5d7cb9aed542c4f523db3f899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:39:25.455430  999015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.f7411b89 ...
	I0729 13:39:25.455443  999015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.f7411b89: {Name:mk588100037b213c74cb58afa74cbaa38d605002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:39:25.455509  999015 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.f7411b89 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt
	I0729 13:39:25.455670  999015 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.f7411b89 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key
	I0729 13:39:25.455819  999015 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key
	I0729 13:39:25.455836  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:39:25.455849  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:39:25.455862  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:39:25.455874  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:39:25.455885  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:39:25.455900  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:39:25.455911  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:39:25.455922  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:39:25.455972  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 13:39:25.455999  999015 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 13:39:25.456010  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:39:25.456029  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:39:25.456055  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:39:25.456071  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:39:25.456109  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:39:25.456134  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 13:39:25.456147  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 13:39:25.456159  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:39:25.456883  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:39:25.483262  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:39:25.506979  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:39:25.530915  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:39:25.555704  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 13:39:25.580023  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:39:25.605011  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:39:25.629923  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:39:25.654336  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 13:39:25.678545  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 13:39:25.702249  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:39:25.727489  999015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:39:25.744359  999015 ssh_runner.go:195] Run: openssl version
	I0729 13:39:25.750381  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 13:39:25.761422  999015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 13:39:25.766248  999015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 13:39:25.766322  999015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 13:39:25.772261  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 13:39:25.781853  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 13:39:25.792847  999015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 13:39:25.797342  999015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 13:39:25.797412  999015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 13:39:25.803282  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:39:25.812721  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:39:25.823697  999015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:39:25.828204  999015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:39:25.828257  999015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:39:25.834118  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:39:25.843359  999015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:39:25.847938  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:39:25.853595  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:39:25.859159  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:39:25.864829  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:39:25.870770  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:39:25.876341  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 13:39:25.881993  999015 kubeadm.go:392] StartCluster: {Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:39:25.882142  999015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:39:25.882200  999015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:39:25.917898  999015 cri.go:89] found id: "2b0b7b8d86d18e00d4def7e6ddd73b2109b7288d493f9c89c24f94c090aef401"
	I0729 13:39:25.917930  999015 cri.go:89] found id: "bd4ab1bad8ca9aaebbf3bdfabd8a7687a660ee5c1b925250be9a30f1eaeaf8b6"
	I0729 13:39:25.917936  999015 cri.go:89] found id: "f147ff5fea55b8c95ea2194801e766e3c3a118adbcbb0aa920a07bf4dd04b550"
	I0729 13:39:25.917940  999015 cri.go:89] found id: "1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98"
	I0729 13:39:25.917944  999015 cri.go:89] found id: "721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56"
	I0729 13:39:25.917948  999015 cri.go:89] found id: "81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1"
	I0729 13:39:25.917953  999015 cri.go:89] found id: "8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1"
	I0729 13:39:25.917956  999015 cri.go:89] found id: "6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8"
	I0729 13:39:25.917960  999015 cri.go:89] found id: "50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd"
	I0729 13:39:25.917969  999015 cri.go:89] found id: "e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3"
	I0729 13:39:25.917973  999015 cri.go:89] found id: "8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda"
	I0729 13:39:25.917977  999015 cri.go:89] found id: "e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1"
	I0729 13:39:25.917981  999015 cri.go:89] found id: "7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a"
	I0729 13:39:25.917985  999015 cri.go:89] found id: ""
	I0729 13:39:25.918042  999015 ssh_runner.go:195] Run: sudo runc list -f json
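	[editor's note] The "found id" lines above come from the crictl command two lines earlier: with --quiet it prints one container ID per line, filtered here by the kube-system namespace label. A minimal local sketch of that listing step follows; it is an illustration, not minikube's cri.go, and it assumes crictl is installed and the CRI socket is reachable (the test run wraps the same command in sudo over ssh_runner).

	// Minimal sketch: list kube-system container IDs the way the log above does.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listKubeSystemContainerIDs() ([]string, error) {
		// --quiet makes crictl print only the container IDs, one per line.
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainerIDs()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}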
	
	
	==> CRI-O <==
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.302818244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7df9b10-f332-4121-9ad0-0e351b23d2fd name=/runtime.v1.RuntimeService/Version
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.303853317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c280fa42-b72e-4ed0-bf7a-2a363423673e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.304902789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260549304874081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c280fa42-b72e-4ed0-bf7a-2a363423673e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.305402023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d4c4a52-291c-4f4a-9f9a-09018ba9d146 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.305504161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d4c4a52-291c-4f4a-9f9a-09018ba9d146 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.305987039Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:145852da7055033c14ba5cfc60ad0ea2687b2a74198afdedffbb8618b96d9b22,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260458255696085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260417259734776,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260408269110931,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260408251768596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bd816fab2d6a451651796d416a47ddea0ce0473c46f90b616d363041375e97,PodSandboxId:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722260404499337932,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ae26ae158cd144a916094ec9de5203551b5ddf380a59bb97405a628a465853,PodSandboxId:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722260383413902872,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e,PodSandboxId:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260371445869052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8,PodSandboxId:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260371345361388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c,PodSandboxId:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722260371146403737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29,PodSandboxId:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260371248139179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722260371087205050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722260371001033922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b,PodSandboxId:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260370948205305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209,PodSandboxId:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260366765075543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"n
ame\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722259875257881458,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743321120007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernet
es.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743268949875,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722259731581881315,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259728097871714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259707030052585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259706969359945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d4c4a52-291c-4f4a-9f9a-09018ba9d146 name=/runtime.v1.RuntimeService/ListContainers
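	[editor's note] When reading the ListContainersResponse dumps above, the CreatedAt values are Unix timestamps in nanoseconds. A tiny Go snippet (illustration only) converts the timestamp of the running storage-provisioner entry from the response above into a readable time:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		createdAt := int64(1722260458255696085) // storage-provisioner, Attempt:4, from the log above
		fmt.Println(time.Unix(0, createdAt).UTC()) // 2024-07-29 13:40:58.255696085 +0000 UTC
	}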
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.353171335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd33452e-b992-4a38-a411-32a24cf69716 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.353247578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd33452e-b992-4a38-a411-32a24cf69716 name=/runtime.v1.RuntimeService/Version
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.355656814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=662c448b-4e0d-4eff-b995-d7b69bd09fe4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.356073095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260549356051230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=662c448b-4e0d-4eff-b995-d7b69bd09fe4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.357504571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2d2656e-69f3-4fb2-894a-535490d96e94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.357702456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2d2656e-69f3-4fb2-894a-535490d96e94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.359030923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:145852da7055033c14ba5cfc60ad0ea2687b2a74198afdedffbb8618b96d9b22,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260458255696085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260417259734776,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260408269110931,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260408251768596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bd816fab2d6a451651796d416a47ddea0ce0473c46f90b616d363041375e97,PodSandboxId:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722260404499337932,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ae26ae158cd144a916094ec9de5203551b5ddf380a59bb97405a628a465853,PodSandboxId:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722260383413902872,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e,PodSandboxId:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260371445869052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8,PodSandboxId:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260371345361388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c,PodSandboxId:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722260371146403737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29,PodSandboxId:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260371248139179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722260371087205050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722260371001033922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b,PodSandboxId:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260370948205305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209,PodSandboxId:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260366765075543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"n
ame\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722259875257881458,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743321120007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernet
es.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743268949875,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722259731581881315,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259728097871714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259707030052585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259706969359945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2d2656e-69f3-4fb2-894a-535490d96e94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.413879074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8db3abf6-373b-495e-b3d0-631a96dd0efc name=/runtime.v1.RuntimeService/Version
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.413955204Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8db3abf6-373b-495e-b3d0-631a96dd0efc name=/runtime.v1.RuntimeService/Version
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.415785608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe8ed78a-3a0d-4a30-96ef-17a629a49431 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.416479178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260549416452483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe8ed78a-3a0d-4a30-96ef-17a629a49431 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.417646856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39c8e3f8-5334-4b35-a367-7da3c7a6eec6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.417728516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39c8e3f8-5334-4b35-a367-7da3c7a6eec6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.418809476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:145852da7055033c14ba5cfc60ad0ea2687b2a74198afdedffbb8618b96d9b22,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260458255696085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260417259734776,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260408269110931,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260408251768596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bd816fab2d6a451651796d416a47ddea0ce0473c46f90b616d363041375e97,PodSandboxId:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722260404499337932,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ae26ae158cd144a916094ec9de5203551b5ddf380a59bb97405a628a465853,PodSandboxId:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722260383413902872,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e,PodSandboxId:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260371445869052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8,PodSandboxId:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260371345361388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c,PodSandboxId:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722260371146403737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29,PodSandboxId:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260371248139179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722260371087205050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722260371001033922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b,PodSandboxId:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260370948205305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209,PodSandboxId:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260366765075543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"n
ame\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722259875257881458,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743321120007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernet
es.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743268949875,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722259731581881315,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259728097871714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259707030052585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259706969359945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39c8e3f8-5334-4b35-a367-7da3c7a6eec6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.450338540Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=db927f11-9527-4f98-a3ed-7f34b51c1d4e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.451216113Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-7xsjn,Uid:1fbdab29-6a6d-4b47-8df5-641b9aad98f0,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260404368663621,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:31:13.775614631Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-104111,Uid:7cb1cfb5a4b597f16540b93a08b39fcb,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722260383324487024,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{kubernetes.io/config.hash: 7cb1cfb5a4b597f16540b93a08b39fcb,kubernetes.io/config.seen: 2024-07-29T13:39:24.640459112Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gcf7q,Uid:196981ba-ed16-427c-ae8b-9b7e8ff36be2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260370731883966,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-29T13:29:02.742252134Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&PodSandboxMetadata{Name:kube-proxy-n6kkf,Uid:4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260370679278330,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:28:46.158972524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-104111,Uid:dc73c358411265f24a0fdb288ab5434e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260370675996570,Lab
els:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.120:8443,kubernetes.io/config.hash: dc73c358411265f24a0fdb288ab5434e,kubernetes.io/config.seen: 2024-07-29T13:28:33.197750844Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-104111,Uid:afdd0eae9701cf7d4013ed5835b6fc65,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260370671325779,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af
dd0eae9701cf7d4013ed5835b6fc65,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: afdd0eae9701cf7d4013ed5835b6fc65,kubernetes.io/config.seen: 2024-07-29T13:28:33.197754134Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&PodSandboxMetadata{Name:kindnet-9phpm,Uid:60e9c45f-5176-492e-90c7-49b0201afe1e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260370670906595,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:28:46.149047133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c
63e3e877d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b61cc52e-771b-484a-99d6-8963665cb1e8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260370664777417,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath
\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T13:29:02.749157522Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&PodSandboxMetadata{Name:etcd-ha-104111,Uid:80cb06508783f1cdddfbd3cd4c58d73c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260370662951728,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.120:2379,kubernetes.io/config.hash: 80cb06508783f1cdddfbd3cd4c58d73c,kubernetes.io/config.seen: 2024-07-29T13:28:33.197756524Z,kubernetes
.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-104111,Uid:2438b1d75fb1de3aa096517b67661add,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260370654536985,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2438b1d75fb1de3aa096517b67661add,kubernetes.io/config.seen: 2024-07-29T13:28:33.197755032Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9jrnl,Uid:0453ed97-efb4-41c1-8bfb-e7e004e618e0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722260366639398770,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:29:02.747775089Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-7xsjn,Uid:1fbdab29-6a6d-4b47-8df5-641b9aad98f0,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722259874094887119,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:31:13.775614631Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9jrnl,Uid:0453ed97-efb4-41c1-8bfb-e7e004e618e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722259743070397300,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:29:02.747775089Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gcf7q,Uid:196981ba-ed16-427c-ae8b-9b7e8ff36be2,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722259743050028290,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:29:02.742252134Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&PodSandboxMetadata{Name:kube-proxy-n6kkf,Uid:4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722259727970689395,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:28:46.158972524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&PodSandboxMetadata{Name:kindnet-9phpm,Uid:60e9c45f-5176-492e-90c7-49b0201afe1e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722259727960198640,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T13:28:46.149047133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&PodSandboxMetadata{Name:etcd-ha-104111,Uid:80cb06508783f1cdddfbd3cd4c58d73c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722259706776085857,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.120:2379,kubernetes.io/config.hash: 80cb06508783f1cdddfbd3cd4c58d73c,kubernetes.io/config.seen: 2024-07-29T13:28:26.319083002Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-104111,Uid:2438b1d75fb1de3aa096517b67661add,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722259706770903228,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2438b1d7
5fb1de3aa096517b67661add,kubernetes.io/config.seen: 2024-07-29T13:28:26.319081129Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=db927f11-9527-4f98-a3ed-7f34b51c1d4e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.452306349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56a43c56-a778-4254-895f-7586b9f5ea7a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.452382933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56a43c56-a778-4254-895f-7586b9f5ea7a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:42:29 ha-104111 crio[3719]: time="2024-07-29 13:42:29.453130860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:145852da7055033c14ba5cfc60ad0ea2687b2a74198afdedffbb8618b96d9b22,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260458255696085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260417259734776,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260408269110931,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260408251768596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bd816fab2d6a451651796d416a47ddea0ce0473c46f90b616d363041375e97,PodSandboxId:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722260404499337932,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ae26ae158cd144a916094ec9de5203551b5ddf380a59bb97405a628a465853,PodSandboxId:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722260383413902872,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e,PodSandboxId:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260371445869052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8,PodSandboxId:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260371345361388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c,PodSandboxId:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722260371146403737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29,PodSandboxId:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260371248139179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722260371087205050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722260371001033922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b,PodSandboxId:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260370948205305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209,PodSandboxId:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260366765075543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"n
ame\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722259875257881458,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743321120007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernet
es.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743268949875,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722259731581881315,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259728097871714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259707030052585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259706969359945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56a43c56-a778-4254-895f-7586b9f5ea7a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	145852da70550       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   f2717ec1ae798       storage-provisioner
	d170d569040e7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Running             kube-controller-manager   2                   8e5557e2793d9       kube-controller-manager-ha-104111
	e2a3169f4bbf4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Running             kube-apiserver            3                   0903a4fd4a46c       kube-apiserver-ha-104111
	ae091ed0f6e94       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   f2717ec1ae798       storage-provisioner
	05bd816fab2d6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   ab05b140cff12       busybox-fc5497c4f-7xsjn
	a1ae26ae158cd       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   e0a4ba4b9a439       kube-vip-ha-104111
	e73307e139630       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   f939461f859dd       kube-proxy-n6kkf
	b9f80c23aaa9e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   9d24fc0c1911e       coredns-7db6d8ff4d-gcf7q
	ba3613e050212       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   fd64d3f03154a       etcd-ha-104111
	af5d35941c2b5       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   d67531e4a0428       kindnet-9phpm
	d14f317d46e88       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   8e5557e2793d9       kube-controller-manager-ha-104111
	56ef5416acf68       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   0903a4fd4a46c       kube-apiserver-ha-104111
	6682bba661cd3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   fe3b7fb7cf557       kube-scheduler-ha-104111
	1aae34217228b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   fb6786851a9e9       coredns-7db6d8ff4d-9jrnl
	d2a033e8feb22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   ab591310b6636       busybox-fc5497c4f-7xsjn
	721762ac4017a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   d0c4b0845fee9       coredns-7db6d8ff4d-9jrnl
	81eca3ce5b15d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   32a1e9c01260e       coredns-7db6d8ff4d-gcf7q
	8fcba14c355c5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   6b9961791d750       kindnet-9phpm
	6bc357136c66b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   60e3f945e9e89       kube-proxy-n6kkf
	e80af660361f5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   a33809de7d3a6       kube-scheduler-ha-104111
	7606e1f107d6c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   3c84713255503       etcd-ha-104111
	
	
	==> coredns [1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2122936823]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:39:42.126) (total time: 10001ms):
	Trace[2122936823]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:39:52.127)
	Trace[2122936823]: [10.001302705s] [10.001302705s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56460->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56460->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56] <==
	[INFO] 10.244.2.2:35659 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00147984s
	[INFO] 10.244.2.2:53135 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000226257s
	[INFO] 10.244.2.2:49731 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189094s
	[INFO] 10.244.2.2:47456 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130859s
	[INFO] 10.244.2.2:41111 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123604s
	[INFO] 10.244.1.2:55083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114636s
	[INFO] 10.244.1.2:48422 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00109487s
	[INFO] 10.244.0.4:39213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116126s
	[INFO] 10.244.0.4:33260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068728s
	[INFO] 10.244.2.2:48083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166018s
	[INFO] 10.244.2.2:58646 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172185s
	[INFO] 10.244.2.2:35393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009321s
	[INFO] 10.244.1.2:57222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116426s
	[INFO] 10.244.0.4:60530 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165705s
	[INFO] 10.244.0.4:35848 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187393s
	[INFO] 10.244.0.4:34740 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104846s
	[INFO] 10.244.2.2:55008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235338s
	[INFO] 10.244.2.2:47084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152504s
	[INFO] 10.244.2.2:39329 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115623s
	[INFO] 10.244.1.2:57485 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001155s
	[INFO] 10.244.1.2:42349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100298s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1878&timeout=8m49s&timeoutSeconds=529&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1878&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1] <==
	[INFO] 10.244.0.4:59749 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102743s
	[INFO] 10.244.0.4:46792 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124933s
	[INFO] 10.244.2.2:34901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159776s
	[INFO] 10.244.2.2:53333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001076187s
	[INFO] 10.244.1.2:57672 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002003185s
	[INFO] 10.244.1.2:53227 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161629s
	[INFO] 10.244.1.2:38444 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092353s
	[INFO] 10.244.1.2:56499 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211011s
	[INFO] 10.244.1.2:57556 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068457s
	[INFO] 10.244.1.2:34023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109815s
	[INFO] 10.244.0.4:40329 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111231s
	[INFO] 10.244.0.4:38637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005437s
	[INFO] 10.244.2.2:36810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104372s
	[INFO] 10.244.1.2:53024 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122232s
	[INFO] 10.244.1.2:40257 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148245s
	[INFO] 10.244.1.2:41500 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080394s
	[INFO] 10.244.0.4:48915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151276s
	[INFO] 10.244.2.2:60231 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001284s
	[INFO] 10.244.1.2:33829 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154134s
	[INFO] 10.244.1.2:57945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123045s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1833&timeout=5m39s&timeoutSeconds=339&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1875&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1878&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8] <==
	Trace[2041965860]: [10.002316691s] [10.002316691s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1718781440]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:39:38.327) (total time: 10000ms):
	Trace[1718781440]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:39:48.328)
	Trace[1718781440]: [10.000982314s] [10.000982314s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-104111
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_28_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:28:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:42:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:40:10 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:40:10 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:40:10 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:40:10 +0000   Mon, 29 Jul 2024 13:29:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-104111
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 613eb8d959344be3989ec50055edd8a7
	  System UUID:                613eb8d9-5934-4be3-989e-c50055edd8a7
	  Boot ID:                    5cf31ff2-8a2f-47f5-8440-f13293b7049d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7xsjn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-9jrnl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-gcf7q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-104111                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-9phpm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-104111             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-104111    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-n6kkf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-104111             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-104111                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m12s              kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-104111 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-104111 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-104111 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                kubelet          Node ha-104111 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node ha-104111 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node ha-104111 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-104111 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Warning  ContainerGCFailed        3m56s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m6s               node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal   RegisteredNode           2m1s               node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal   RegisteredNode           28s                node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	
	
	Name:               ha-104111-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_29_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:29:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:42:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:40:56 +0000   Mon, 29 Jul 2024 13:40:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:40:56 +0000   Mon, 29 Jul 2024 13:40:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:40:56 +0000   Mon, 29 Jul 2024 13:40:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:40:56 +0000   Mon, 29 Jul 2024 13:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    ha-104111-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0636dc68c5464326baedc11fd97131b2
	  System UUID:                0636dc68-c546-4326-baed-c11fd97131b2
	  Boot ID:                    6b6d38a1-f64b-49d6-b7a7-65634eab971a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sf8mb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-104111-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-njndz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-104111-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-104111-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5dnvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-104111-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-104111-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m12s                  kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-104111-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-104111-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-104111-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  NodeNotReady             9m19s                  node-controller  Node ha-104111-m02 status is now: NodeNotReady
	  Normal  Starting                 2m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m41s (x8 over 2m42s)  kubelet          Node ha-104111-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s (x8 over 2m42s)  kubelet          Node ha-104111-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s (x7 over 2m42s)  kubelet          Node ha-104111-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           2m1s                   node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           28s                    node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	
	
	Name:               ha-104111-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_30_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:30:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:42:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:42:07 +0000   Mon, 29 Jul 2024 13:41:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:42:07 +0000   Mon, 29 Jul 2024 13:41:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:42:07 +0000   Mon, 29 Jul 2024 13:41:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:42:07 +0000   Mon, 29 Jul 2024 13:41:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-104111-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60c8ff09952242e0a709074b86dabf4c
	  System UUID:                60c8ff09-9522-42e0-a709-074b86dabf4c
	  Boot ID:                    135320c5-22cf-4a5b-8b74-e80f23862115
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cbdn4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-104111-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-mt9dk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-104111-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-104111-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-m765x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-104111-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-104111-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 36s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-104111-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-104111-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-104111-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal   RegisteredNode           2m6s               node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal   RegisteredNode           2m1s               node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	  Normal   NodeNotReady             86s                node-controller  Node ha-104111-m03 status is now: NodeNotReady
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  53s (x2 over 53s)  kubelet          Node ha-104111-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s (x2 over 53s)  kubelet          Node ha-104111-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s (x2 over 53s)  kubelet          Node ha-104111-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 53s                kubelet          Node ha-104111-m03 has been rebooted, boot id: 135320c5-22cf-4a5b-8b74-e80f23862115
	  Normal   NodeReady                53s                kubelet          Node ha-104111-m03 status is now: NodeReady
	  Normal   RegisteredNode           28s                node-controller  Node ha-104111-m03 event: Registered Node ha-104111-m03 in Controller
	
	
	Name:               ha-104111-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_31_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:31:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:42:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:42:21 +0000   Mon, 29 Jul 2024 13:42:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:42:21 +0000   Mon, 29 Jul 2024 13:42:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:42:21 +0000   Mon, 29 Jul 2024 13:42:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:42:21 +0000   Mon, 29 Jul 2024 13:42:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-104111-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f6a0723aec74e89b187376957d3127c
	  System UUID:                4f6a0723-aec7-4e89-b187-376957d3127c
	  Boot ID:                    d938a38c-b54f-4872-be95-d0a289fd5060
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fbnbc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-cmtgm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-104111-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-104111-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-104111-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-104111-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m6s               node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   RegisteredNode           2m1s               node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   NodeNotReady             86s                node-controller  Node ha-104111-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           28s                node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-104111-m04 has been rebooted, boot id: d938a38c-b54f-4872-be95-d0a289fd5060
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-104111-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-104111-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-104111-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8s                 kubelet          Node ha-104111-m04 status is now: NodeNotReady
	  Normal   NodeReady                8s                 kubelet          Node ha-104111-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[ +10.840145] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057324] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056430] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.166263] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.131459] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268832] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.161114] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +3.971481] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.058723] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.299885] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.972930] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 13:29] kauditd_printk_skb: 38 callbacks suppressed
	[ +36.765451] kauditd_printk_skb: 24 callbacks suppressed
	[Jul29 13:39] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.158398] systemd-fstab-generator[3648]: Ignoring "noauto" option for root device
	[  +0.175912] systemd-fstab-generator[3662]: Ignoring "noauto" option for root device
	[  +0.143053] systemd-fstab-generator[3674]: Ignoring "noauto" option for root device
	[  +0.282599] systemd-fstab-generator[3702]: Ignoring "noauto" option for root device
	[  +7.025238] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +0.093194] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.891546] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.089559] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.058994] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 13:40] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a] <==
	2024/07/29 13:37:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T13:37:45.539706Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"678.363436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T13:37:45.539724Z","caller":"traceutil/trace.go:171","msg":"trace[61922858] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; }","duration":"678.397244ms","start":"2024-07-29T13:37:44.861322Z","end":"2024-07-29T13:37:45.53972Z","steps":["trace[61922858] 'agreement among raft nodes before linearized reading'  (duration: 678.373825ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:37:45.539739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:37:44.861316Z","time spent":"678.419577ms","remote":"127.0.0.1:45912","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 "}
	2024/07/29 13:37:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T13:37:45.608466Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.120:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T13:37:45.608634Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.120:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T13:37:45.608768Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"af2c917f7a70ddd0","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T13:37:45.608951Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.608997Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609024Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609063Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609126Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609179Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.60921Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609218Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609226Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609244Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609323Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609349Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609391Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609419Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.612771Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.120:2380"}
	{"level":"info","ts":"2024-07-29T13:37:45.612904Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.120:2380"}
	{"level":"info","ts":"2024-07-29T13:37:45.61294Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-104111","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.120:2380"],"advertise-client-urls":["https://192.168.39.120:2379"]}
	
	
	==> etcd [ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29] <==
	{"level":"warn","ts":"2024-07-29T13:41:31.611023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:41:31.621369Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:41:31.624371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:41:31.627207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:41:31.642458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:41:31.695653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"af2c917f7a70ddd0","from":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T13:41:32.103632Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9e64f0bb14d4f4a0","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:32.103775Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9e64f0bb14d4f4a0","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:32.933305Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"9e64f0bb14d4f4a0","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:32.933415Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9e64f0bb14d4f4a0","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:36.935889Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"9e64f0bb14d4f4a0","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:36.936031Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9e64f0bb14d4f4a0","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:37.104132Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9e64f0bb14d4f4a0","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:37.104143Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9e64f0bb14d4f4a0","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:40.938212Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"9e64f0bb14d4f4a0","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:40.938327Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9e64f0bb14d4f4a0","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:42.104974Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9e64f0bb14d4f4a0","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T13:41:42.10503Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9e64f0bb14d4f4a0","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T13:41:43.83702Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:41:43.838805Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:41:43.838916Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:41:43.848684Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"af2c917f7a70ddd0","to":"9e64f0bb14d4f4a0","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T13:41:43.848823Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:41:43.84924Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"af2c917f7a70ddd0","to":"9e64f0bb14d4f4a0","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T13:41:43.849304Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	
	
	==> kernel <==
	 13:42:30 up 14 min,  0 users,  load average: 0.46, 0.48, 0.37
	Linux ha-104111 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1] <==
	I0729 13:37:12.634225       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:37:22.627786       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:37:22.627901       1 main.go:299] handling current node
	I0729 13:37:22.627931       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:37:22.627949       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:37:22.628134       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:37:22.628157       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:37:22.628222       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:37:22.628241       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:37:32.633218       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:37:32.633263       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:37:32.633476       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:37:32.633518       1 main.go:299] handling current node
	I0729 13:37:32.633541       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:37:32.633642       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:37:32.633743       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:37:32.633765       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:37:42.627963       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:37:42.628055       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:37:42.628233       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:37:42.628258       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:37:42.628327       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:37:42.628347       1 main.go:299] handling current node
	I0729 13:37:42.628369       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:37:42.628390       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c] <==
	I0729 13:41:52.358222       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:42:02.349361       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:42:02.349423       1 main.go:299] handling current node
	I0729 13:42:02.349442       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:42:02.349451       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:42:02.349680       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:42:02.349697       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:42:02.349799       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:42:02.349831       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:42:12.352720       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:42:12.352801       1 main.go:299] handling current node
	I0729 13:42:12.352831       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:42:12.352837       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:42:12.353200       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:42:12.353301       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:42:12.353508       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:42:12.353659       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:42:22.349818       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:42:22.349946       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:42:22.350092       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:42:22.350116       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:42:22.350186       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:42:22.350206       1 main.go:299] handling current node
	I0729 13:42:22.350228       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:42:22.350244       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac] <==
	I0729 13:39:31.661209       1 options.go:221] external host was not specified, using 192.168.39.120
	I0729 13:39:31.666061       1 server.go:148] Version: v1.30.3
	I0729 13:39:31.666120       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:39:32.388673       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 13:39:32.403344       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:39:32.412927       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 13:39:32.412991       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 13:39:32.413182       1 instance.go:299] Using reconciler: lease
	W0729 13:39:52.383090       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0729 13:39:52.383496       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0729 13:39:52.414501       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc] <==
	I0729 13:40:10.545203       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0729 13:40:10.545237       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0729 13:40:10.623521       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 13:40:10.623535       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 13:40:10.623998       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 13:40:10.625193       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 13:40:10.626166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 13:40:10.633944       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 13:40:10.634430       1 shared_informer.go:320] Caches are synced for configmaps
	W0729 13:40:10.642175       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202]
	I0729 13:40:10.645945       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 13:40:10.645990       1 aggregator.go:165] initial CRD sync complete...
	I0729 13:40:10.646002       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 13:40:10.646007       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 13:40:10.646012       1 cache.go:39] Caches are synced for autoregister controller
	I0729 13:40:10.680084       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 13:40:10.683449       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:40:10.683486       1 policy_source.go:224] refreshing policies
	I0729 13:40:10.718829       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 13:40:10.743648       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 13:40:10.757684       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 13:40:10.767201       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 13:40:11.533181       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 13:40:11.996021       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.120 192.168.39.140 192.168.39.202]
	W0729 13:40:21.996038       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.120 192.168.39.140]
	
	
	==> kube-controller-manager [d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4] <==
	I0729 13:39:32.804114       1 serving.go:380] Generated self-signed cert in-memory
	I0729 13:39:33.323118       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 13:39:33.323160       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:39:33.325115       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 13:39:33.325274       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 13:39:33.325784       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 13:39:33.325865       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0729 13:39:53.420626       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.120:8443/healthz\": dial tcp 192.168.39.120:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4] <==
	I0729 13:40:28.409572       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 13:40:28.411185       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 13:40:28.412388       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 13:40:28.413767       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 13:40:28.430746       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 13:40:28.452321       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 13:40:28.902680       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 13:40:28.966635       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 13:40:28.966716       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 13:40:33.109723       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-cn4cp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-cn4cp\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 13:40:33.109848       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f41dd6f6-52cd-42f9-a947-fbe62a033032", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-cn4cp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-cn4cp": the object has been modified; please apply your changes to the latest version and try again
	I0729 13:40:33.122530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.302481ms"
	I0729 13:40:33.122721       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.652µs"
	I0729 13:41:03.125096       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-cn4cp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-cn4cp\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 13:41:03.125183       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f41dd6f6-52cd-42f9-a947-fbe62a033032", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-cn4cp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-cn4cp": the object has been modified; please apply your changes to the latest version and try again
	I0729 13:41:03.133843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.366456ms"
	I0729 13:41:03.134244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.862µs"
	E0729 13:41:03.150815       1 daemon_controller.go:324] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"1b2f4d50-64e4-4f03-b100-467ada02faf0", ResourceVersion:"2161", Generation:1, CreationTimestamp:time.Date(2024, time.July, 29, 13, 28, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001ee4900), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001ecfe00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0008058f0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000805a10), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.k8s.io/kube-proxy:v1.30.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001ee4940)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001ed6de0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001eee528), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001232e80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001ee1500)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001eee580)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0729 13:41:03.188522       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="204.113µs"
	I0729 13:41:03.189300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.822µs"
	I0729 13:41:37.306652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="167.905µs"
	I0729 13:41:51.675006       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.092155ms"
	I0729 13:41:51.675315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.322µs"
	I0729 13:42:21.649320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-104111-m04"
	
	
	==> kube-proxy [6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8] <==
	E0729 13:36:42.071518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:42.071769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:42.071874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:42.071750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:42.071953       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:48.214410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:48.214504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:48.214762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:48.214811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:48.214874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:48.214920       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:00.502492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:00.502492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:00.503267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:00.503298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:00.502618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:00.503352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:15.862231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:15.862374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:22.005975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:22.006088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:28.150131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:28.150185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:43.509039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:43.509092       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e] <==
	I0729 13:39:32.726752       1 server_linux.go:69] "Using iptables proxy"
	E0729 13:39:34.101042       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 13:39:37.173075       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 13:39:40.245961       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 13:39:46.389075       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 13:39:58.677221       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 13:40:17.538492       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	I0729 13:40:17.592178       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:40:17.592261       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:40:17.592285       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:40:17.596694       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:40:17.597135       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:40:17.597240       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:40:17.600397       1 config.go:192] "Starting service config controller"
	I0729 13:40:17.600459       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:40:17.600499       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:40:17.600526       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:40:17.604150       1 config.go:319] "Starting node config controller"
	I0729 13:40:17.604188       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:40:17.701435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:40:17.701533       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:40:17.704397       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b] <==
	W0729 13:40:01.935232       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.120:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:01.935303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.120:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:02.035358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.120:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:02.035539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.120:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:02.629461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.120:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:02.629518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.120:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:02.678262       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.120:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:02.678345       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.120:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:02.961108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.120:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:02.961275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.120:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:03.837337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.120:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:03.837502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.120:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:10.554926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:40:10.554988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:40:10.555220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:40:10.555264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:40:10.555309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:40:10.555341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:40:10.555385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:40:10.555412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 13:40:10.555452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 13:40:10.555477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 13:40:10.555521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:40:10.557633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 13:40:15.124646       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1] <==
	W0729 13:37:41.287834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:37:41.287933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:37:41.524725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 13:37:41.524775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:37:41.642765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:37:41.642815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 13:37:41.729604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 13:37:41.729699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 13:37:44.270342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:37:44.270395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:37:44.297862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 13:37:44.297919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 13:37:44.411217       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 13:37:44.411334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 13:37:44.424377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:37:44.424434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 13:37:44.518120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:37:44.518176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:37:44.627616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:37:44.627676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:37:44.841290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:37:44.841320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 13:37:44.934709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:37:44.934776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:37:45.516615       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 13:40:10 ha-104111 kubelet[1361]: E0729 13:40:10.965005    1361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-104111?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 29 13:40:10 ha-104111 kubelet[1361]: E0729 13:40:10.965192    1361 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-104111\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 13:40:10 ha-104111 kubelet[1361]: I0729 13:40:10.965775    1361 status_manager.go:853] "Failed to get status for pod" podUID="4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1" pod="kube-system/kube-proxy-n6kkf" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6kkf\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 13:40:17 ha-104111 kubelet[1361]: I0729 13:40:17.240699    1361 scope.go:117] "RemoveContainer" containerID="d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4"
	Jul 29 13:40:23 ha-104111 kubelet[1361]: I0729 13:40:23.251464    1361 scope.go:117] "RemoveContainer" containerID="ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f"
	Jul 29 13:40:23 ha-104111 kubelet[1361]: E0729 13:40:23.251955    1361 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b61cc52e-771b-484a-99d6-8963665cb1e8)\"" pod="kube-system/storage-provisioner" podUID="b61cc52e-771b-484a-99d6-8963665cb1e8"
	Jul 29 13:40:31 ha-104111 kubelet[1361]: I0729 13:40:31.108461    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-7xsjn" podStartSLOduration=557.174368686 podStartE2EDuration="9m18.108392939s" podCreationTimestamp="2024-07-29 13:31:13 +0000 UTC" firstStartedPulling="2024-07-29 13:31:14.31271906 +0000 UTC m=+161.218203599" lastFinishedPulling="2024-07-29 13:31:15.246743309 +0000 UTC m=+162.152227852" observedRunningTime="2024-07-29 13:31:15.989283605 +0000 UTC m=+162.894768166" watchObservedRunningTime="2024-07-29 13:40:31.108392939 +0000 UTC m=+718.013877498"
	Jul 29 13:40:33 ha-104111 kubelet[1361]: E0729 13:40:33.263899    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:40:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:40:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:40:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:40:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:40:35 ha-104111 kubelet[1361]: I0729 13:40:35.242139    1361 scope.go:117] "RemoveContainer" containerID="ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f"
	Jul 29 13:40:35 ha-104111 kubelet[1361]: E0729 13:40:35.242836    1361 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b61cc52e-771b-484a-99d6-8963665cb1e8)\"" pod="kube-system/storage-provisioner" podUID="b61cc52e-771b-484a-99d6-8963665cb1e8"
	Jul 29 13:40:47 ha-104111 kubelet[1361]: I0729 13:40:47.240035    1361 scope.go:117] "RemoveContainer" containerID="ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f"
	Jul 29 13:40:47 ha-104111 kubelet[1361]: E0729 13:40:47.240684    1361 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b61cc52e-771b-484a-99d6-8963665cb1e8)\"" pod="kube-system/storage-provisioner" podUID="b61cc52e-771b-484a-99d6-8963665cb1e8"
	Jul 29 13:40:48 ha-104111 kubelet[1361]: I0729 13:40:48.240665    1361 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-104111" podUID="edfeb506-2884-4406-92cf-c35fce56d7c4"
	Jul 29 13:40:48 ha-104111 kubelet[1361]: I0729 13:40:48.259175    1361 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-104111"
	Jul 29 13:40:58 ha-104111 kubelet[1361]: I0729 13:40:58.240154    1361 scope.go:117] "RemoveContainer" containerID="ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f"
	Jul 29 13:40:58 ha-104111 kubelet[1361]: I0729 13:40:58.967435    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-104111" podStartSLOduration=10.967411321 podStartE2EDuration="10.967411321s" podCreationTimestamp="2024-07-29 13:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 13:40:53.25891921 +0000 UTC m=+740.164403770" watchObservedRunningTime="2024-07-29 13:40:58.967411321 +0000 UTC m=+745.872895880"
	Jul 29 13:41:33 ha-104111 kubelet[1361]: E0729 13:41:33.268948    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:41:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:41:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:41:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:41:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:42:28.973581 1000467 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19338-974764/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
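(Note: the stderr above ends with logs.go:258 reporting "bufio.Scanner: token too long" while reading lastStart.txt. Go's bufio.Scanner refuses any single line longer than bufio.MaxScanTokenSize, 64 KiB, unless the caller supplies a larger buffer. A minimal sketch of reading such a file with a raised limit; the file name and the 1 MiB cap are illustrative assumptions, not the harness's actual code:)

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative stand-in for the lastStart.txt file named in the log above.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB); one longer line
	// makes Scan fail with "bufio.Scanner: token too long". Raise it to 1 MiB.
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err) // without the Buffer call, this is where "token too long" surfaces
	}
}
```

(Without the scanner.Buffer call, the loop stops at the first over-long line and scanner.Err() returns exactly the error quoted in the stderr above.)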
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-104111 -n ha-104111
helpers_test.go:261: (dbg) Run:  kubectl --context ha-104111 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (408.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 stop -v=7 --alsologtostderr
E0729 13:44:30.662886  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 stop -v=7 --alsologtostderr: exit status 82 (2m0.47906487s)

                                                
                                                
-- stdout --
	* Stopping node "ha-104111-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:42:48.710235 1000885 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:42:48.710379 1000885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:42:48.710392 1000885 out.go:304] Setting ErrFile to fd 2...
	I0729 13:42:48.710397 1000885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:42:48.710648 1000885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:42:48.710883 1000885 out.go:298] Setting JSON to false
	I0729 13:42:48.710960 1000885 mustload.go:65] Loading cluster: ha-104111
	I0729 13:42:48.711342 1000885 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:42:48.711434 1000885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:42:48.711612 1000885 mustload.go:65] Loading cluster: ha-104111
	I0729 13:42:48.711779 1000885 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:42:48.711825 1000885 stop.go:39] StopHost: ha-104111-m04
	I0729 13:42:48.712191 1000885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:42:48.712234 1000885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:42:48.728402 1000885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I0729 13:42:48.728885 1000885 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:42:48.729484 1000885 main.go:141] libmachine: Using API Version  1
	I0729 13:42:48.729511 1000885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:42:48.729891 1000885 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:42:48.732242 1000885 out.go:177] * Stopping node "ha-104111-m04"  ...
	I0729 13:42:48.733710 1000885 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 13:42:48.733748 1000885 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:42:48.733995 1000885 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 13:42:48.734028 1000885 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:42:48.737072 1000885 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:42:48.737595 1000885 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:42:15 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:42:48.737638 1000885 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:42:48.737765 1000885 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:42:48.737927 1000885 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:42:48.738096 1000885 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:42:48.738304 1000885 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	I0729 13:42:48.826981 1000885 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 13:42:48.880019 1000885 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 13:42:48.932675 1000885 main.go:141] libmachine: Stopping "ha-104111-m04"...
	I0729 13:42:48.932705 1000885 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:42:48.934314 1000885 main.go:141] libmachine: (ha-104111-m04) Calling .Stop
	I0729 13:42:48.937478 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 0/120
	I0729 13:42:49.939221 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 1/120
	I0729 13:42:50.940457 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 2/120
	I0729 13:42:51.941763 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 3/120
	I0729 13:42:52.943002 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 4/120
	I0729 13:42:53.944569 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 5/120
	I0729 13:42:54.945913 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 6/120
	I0729 13:42:55.947105 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 7/120
	I0729 13:42:56.948319 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 8/120
	I0729 13:42:57.949630 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 9/120
	I0729 13:42:58.950855 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 10/120
	I0729 13:42:59.952256 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 11/120
	I0729 13:43:00.953550 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 12/120
	I0729 13:43:01.955115 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 13/120
	I0729 13:43:02.956451 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 14/120
	I0729 13:43:03.958555 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 15/120
	I0729 13:43:04.959914 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 16/120
	I0729 13:43:05.961309 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 17/120
	I0729 13:43:06.963425 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 18/120
	I0729 13:43:07.965371 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 19/120
	I0729 13:43:08.967486 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 20/120
	I0729 13:43:09.968728 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 21/120
	I0729 13:43:10.970304 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 22/120
	I0729 13:43:11.971592 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 23/120
	I0729 13:43:12.973303 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 24/120
	I0729 13:43:13.975269 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 25/120
	I0729 13:43:14.976747 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 26/120
	I0729 13:43:15.978224 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 27/120
	I0729 13:43:16.980228 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 28/120
	I0729 13:43:17.981693 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 29/120
	I0729 13:43:18.984083 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 30/120
	I0729 13:43:19.985505 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 31/120
	I0729 13:43:20.986903 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 32/120
	I0729 13:43:21.988370 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 33/120
	I0729 13:43:22.989842 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 34/120
	I0729 13:43:23.991745 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 35/120
	I0729 13:43:24.993094 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 36/120
	I0729 13:43:25.995037 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 37/120
	I0729 13:43:26.996457 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 38/120
	I0729 13:43:27.998100 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 39/120
	I0729 13:43:28.999762 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 40/120
	I0729 13:43:30.001602 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 41/120
	I0729 13:43:31.003990 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 42/120
	I0729 13:43:32.005606 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 43/120
	I0729 13:43:33.007076 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 44/120
	I0729 13:43:34.009191 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 45/120
	I0729 13:43:35.010546 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 46/120
	I0729 13:43:36.012000 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 47/120
	I0729 13:43:37.013279 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 48/120
	I0729 13:43:38.015438 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 49/120
	I0729 13:43:39.017485 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 50/120
	I0729 13:43:40.018870 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 51/120
	I0729 13:43:41.020209 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 52/120
	I0729 13:43:42.022225 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 53/120
	I0729 13:43:43.023479 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 54/120
	I0729 13:43:44.025376 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 55/120
	I0729 13:43:45.026913 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 56/120
	I0729 13:43:46.028875 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 57/120
	I0729 13:43:47.031279 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 58/120
	I0729 13:43:48.032884 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 59/120
	I0729 13:43:49.034819 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 60/120
	I0729 13:43:50.037173 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 61/120
	I0729 13:43:51.038995 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 62/120
	I0729 13:43:52.040918 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 63/120
	I0729 13:43:53.042279 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 64/120
	I0729 13:43:54.044034 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 65/120
	I0729 13:43:55.045514 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 66/120
	I0729 13:43:56.046752 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 67/120
	I0729 13:43:57.048149 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 68/120
	I0729 13:43:58.049511 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 69/120
	I0729 13:43:59.051837 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 70/120
	I0729 13:44:00.053899 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 71/120
	I0729 13:44:01.055359 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 72/120
	I0729 13:44:02.056668 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 73/120
	I0729 13:44:03.058869 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 74/120
	I0729 13:44:04.060885 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 75/120
	I0729 13:44:05.062152 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 76/120
	I0729 13:44:06.063697 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 77/120
	I0729 13:44:07.065042 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 78/120
	I0729 13:44:08.067430 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 79/120
	I0729 13:44:09.069475 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 80/120
	I0729 13:44:10.070865 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 81/120
	I0729 13:44:11.072186 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 82/120
	I0729 13:44:12.073621 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 83/120
	I0729 13:44:13.074869 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 84/120
	I0729 13:44:14.076755 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 85/120
	I0729 13:44:15.078099 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 86/120
	I0729 13:44:16.079538 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 87/120
	I0729 13:44:17.081086 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 88/120
	I0729 13:44:18.082602 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 89/120
	I0729 13:44:19.084920 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 90/120
	I0729 13:44:20.086296 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 91/120
	I0729 13:44:21.088131 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 92/120
	I0729 13:44:22.089637 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 93/120
	I0729 13:44:23.091275 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 94/120
	I0729 13:44:24.093391 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 95/120
	I0729 13:44:25.095003 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 96/120
	I0729 13:44:26.096504 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 97/120
	I0729 13:44:27.097816 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 98/120
	I0729 13:44:28.099157 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 99/120
	I0729 13:44:29.101191 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 100/120
	I0729 13:44:30.102925 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 101/120
	I0729 13:44:31.104853 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 102/120
	I0729 13:44:32.106848 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 103/120
	I0729 13:44:33.108106 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 104/120
	I0729 13:44:34.109467 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 105/120
	I0729 13:44:35.110870 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 106/120
	I0729 13:44:36.112182 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 107/120
	I0729 13:44:37.113524 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 108/120
	I0729 13:44:38.115558 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 109/120
	I0729 13:44:39.117675 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 110/120
	I0729 13:44:40.119166 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 111/120
	I0729 13:44:41.120752 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 112/120
	I0729 13:44:42.122171 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 113/120
	I0729 13:44:43.123848 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 114/120
	I0729 13:44:44.125727 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 115/120
	I0729 13:44:45.127789 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 116/120
	I0729 13:44:46.129039 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 117/120
	I0729 13:44:47.131121 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 118/120
	I0729 13:44:48.133259 1000885 main.go:141] libmachine: (ha-104111-m04) Waiting for machine to stop 119/120
	I0729 13:44:49.134573 1000885 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 13:44:49.134664 1000885 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 13:44:49.136700 1000885 out.go:177] 
	W0729 13:44:49.138139 1000885 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 13:44:49.138157 1000885 out.go:239] * 
	* 
	W0729 13:44:49.143356 1000885 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:44:49.144713 1000885 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-104111 stop -v=7 --alsologtostderr": exit status 82
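(Note: the "Waiting for machine to stop N/120" lines above are a once-per-second poll that gives up after 120 attempts, which is what surfaces here as GUEST_STOP_TIMEOUT and exit status 82. A minimal sketch of that polling pattern; the function names and the always-"Running" state check are illustrative assumptions, not minikube's actual KVM driver code:)

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for querying the hypervisor; it always reports
// "Running", mimicking the stuck node in the log above.
func vmState(name string) string { return "Running" }

// stopVM polls the machine state once per second, up to maxAttempts times,
// and gives up with an error if the VM never leaves the "Running" state.
func stopVM(name string, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if vmState(name) != "Running" {
			return nil
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopVM("ha-104111-m04", 120); err != nil {
		fmt.Println("stop err:", err)
	}
}
```

(In the failed run above the guest never leaves the Running state, so all 120 iterations elapse and the stop returns the 'unable to stop vm, current state "Running"' error quoted in the log.)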
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr: exit status 3 (18.890433192s)

                                                
                                                
-- stdout --
	ha-104111
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-104111-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:44:49.191736 1001315 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:44:49.192008 1001315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:44:49.192017 1001315 out.go:304] Setting ErrFile to fd 2...
	I0729 13:44:49.192021 1001315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:44:49.192177 1001315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:44:49.192335 1001315 out.go:298] Setting JSON to false
	I0729 13:44:49.192361 1001315 mustload.go:65] Loading cluster: ha-104111
	I0729 13:44:49.192470 1001315 notify.go:220] Checking for updates...
	I0729 13:44:49.192804 1001315 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:44:49.192832 1001315 status.go:255] checking status of ha-104111 ...
	I0729 13:44:49.193296 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.193372 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.211932 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I0729 13:44:49.212316 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.213033 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.213066 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.213386 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.213578 1001315 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:44:49.215052 1001315 status.go:330] ha-104111 host status = "Running" (err=<nil>)
	I0729 13:44:49.215068 1001315 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:44:49.215354 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.215399 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.230807 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36743
	I0729 13:44:49.231266 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.231714 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.231739 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.232069 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.232279 1001315 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:44:49.235650 1001315 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:44:49.236109 1001315 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:44:49.236135 1001315 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:44:49.236313 1001315 host.go:66] Checking if "ha-104111" exists ...
	I0729 13:44:49.236662 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.236719 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.251291 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0729 13:44:49.251681 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.252131 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.252153 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.252568 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.252814 1001315 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:44:49.253074 1001315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:44:49.253103 1001315 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:44:49.255928 1001315 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:44:49.256490 1001315 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:44:49.256511 1001315 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:44:49.256552 1001315 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:44:49.256744 1001315 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:44:49.256893 1001315 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:44:49.257055 1001315 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:44:49.342018 1001315 ssh_runner.go:195] Run: systemctl --version
	I0729 13:44:49.348682 1001315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:49.365457 1001315 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:44:49.365489 1001315 api_server.go:166] Checking apiserver status ...
	I0729 13:44:49.365524 1001315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:44:49.382046 1001315 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4948/cgroup
	W0729 13:44:49.392159 1001315 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4948/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:44:49.392217 1001315 ssh_runner.go:195] Run: ls
	I0729 13:44:49.396659 1001315 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:44:49.402738 1001315 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:44:49.402765 1001315 status.go:422] ha-104111 apiserver status = Running (err=<nil>)
	I0729 13:44:49.402777 1001315 status.go:257] ha-104111 status: &{Name:ha-104111 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:44:49.402792 1001315 status.go:255] checking status of ha-104111-m02 ...
	I0729 13:44:49.403192 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.403237 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.422125 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0729 13:44:49.422542 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.423065 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.423087 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.423418 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.423596 1001315 main.go:141] libmachine: (ha-104111-m02) Calling .GetState
	I0729 13:44:49.425091 1001315 status.go:330] ha-104111-m02 host status = "Running" (err=<nil>)
	I0729 13:44:49.425110 1001315 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:44:49.425541 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.425585 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.440395 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38015
	I0729 13:44:49.440881 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.441407 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.441431 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.441790 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.441997 1001315 main.go:141] libmachine: (ha-104111-m02) Calling .GetIP
	I0729 13:44:49.444794 1001315 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:44:49.445198 1001315 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:39:36 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:44:49.445225 1001315 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:44:49.445372 1001315 host.go:66] Checking if "ha-104111-m02" exists ...
	I0729 13:44:49.445662 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.445700 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.461360 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
	I0729 13:44:49.461766 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.462170 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.462191 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.462507 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.462687 1001315 main.go:141] libmachine: (ha-104111-m02) Calling .DriverName
	I0729 13:44:49.462918 1001315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:44:49.462940 1001315 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHHostname
	I0729 13:44:49.465804 1001315 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:44:49.466223 1001315 main.go:141] libmachine: (ha-104111-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c5:02", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:39:36 +0000 UTC Type:0 Mac:52:54:00:5b:c5:02 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-104111-m02 Clientid:01:52:54:00:5b:c5:02}
	I0729 13:44:49.466260 1001315 main.go:141] libmachine: (ha-104111-m02) DBG | domain ha-104111-m02 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:c5:02 in network mk-ha-104111
	I0729 13:44:49.466449 1001315 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHPort
	I0729 13:44:49.466634 1001315 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHKeyPath
	I0729 13:44:49.466786 1001315 main.go:141] libmachine: (ha-104111-m02) Calling .GetSSHUsername
	I0729 13:44:49.466922 1001315 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m02/id_rsa Username:docker}
	I0729 13:44:49.549401 1001315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:44:49.565355 1001315 kubeconfig.go:125] found "ha-104111" server: "https://192.168.39.254:8443"
	I0729 13:44:49.565392 1001315 api_server.go:166] Checking apiserver status ...
	I0729 13:44:49.565442 1001315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:44:49.580210 1001315 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	W0729 13:44:49.589354 1001315 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:44:49.589401 1001315 ssh_runner.go:195] Run: ls
	I0729 13:44:49.594227 1001315 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:44:49.598367 1001315 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:44:49.598389 1001315 status.go:422] ha-104111-m02 apiserver status = Running (err=<nil>)
	I0729 13:44:49.598398 1001315 status.go:257] ha-104111-m02 status: &{Name:ha-104111-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:44:49.598413 1001315 status.go:255] checking status of ha-104111-m04 ...
	I0729 13:44:49.598806 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.598859 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.614058 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
	I0729 13:44:49.614533 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.615080 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.615107 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.615415 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.615603 1001315 main.go:141] libmachine: (ha-104111-m04) Calling .GetState
	I0729 13:44:49.617093 1001315 status.go:330] ha-104111-m04 host status = "Running" (err=<nil>)
	I0729 13:44:49.617110 1001315 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:44:49.617373 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.617403 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.632131 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0729 13:44:49.632582 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.633022 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.633058 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.633338 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.633506 1001315 main.go:141] libmachine: (ha-104111-m04) Calling .GetIP
	I0729 13:44:49.636171 1001315 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:44:49.636611 1001315 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:42:15 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:44:49.636638 1001315 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:44:49.636794 1001315 host.go:66] Checking if "ha-104111-m04" exists ...
	I0729 13:44:49.637080 1001315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:44:49.637112 1001315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:44:49.651651 1001315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44755
	I0729 13:44:49.652085 1001315 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:44:49.652542 1001315 main.go:141] libmachine: Using API Version  1
	I0729 13:44:49.652567 1001315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:44:49.652924 1001315 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:44:49.653129 1001315 main.go:141] libmachine: (ha-104111-m04) Calling .DriverName
	I0729 13:44:49.653334 1001315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:44:49.653367 1001315 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHHostname
	I0729 13:44:49.656043 1001315 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:44:49.656461 1001315 main.go:141] libmachine: (ha-104111-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:31:bf", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:42:15 +0000 UTC Type:0 Mac:52:54:00:c2:31:bf Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-104111-m04 Clientid:01:52:54:00:c2:31:bf}
	I0729 13:44:49.656482 1001315 main.go:141] libmachine: (ha-104111-m04) DBG | domain ha-104111-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:c2:31:bf in network mk-ha-104111
	I0729 13:44:49.656628 1001315 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHPort
	I0729 13:44:49.656794 1001315 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHKeyPath
	I0729 13:44:49.656962 1001315 main.go:141] libmachine: (ha-104111-m04) Calling .GetSSHUsername
	I0729 13:44:49.657105 1001315 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111-m04/id_rsa Username:docker}
	W0729 13:45:08.036627 1001315 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.40:22: connect: no route to host
	W0729 13:45:08.036734 1001315 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.40:22: connect: no route to host
	E0729 13:45:08.036751 1001315 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.40:22: connect: no route to host
	I0729 13:45:08.036763 1001315 status.go:257] ha-104111-m04 status: &{Name:ha-104111-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 13:45:08.036805 1001315 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.40:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr" : exit status 3
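(Note: the status stderr above checks each node's apiserver over the control-plane VIP, e.g. "Checking apiserver healthz at https://192.168.39.254:8443/healthz ..." returning 200 for the surviving control planes, while the SSH dial to ha-104111-m04 fails with "no route to host". A minimal sketch of that kind of reachability probe; the address is copied from the log, and skipping TLS verification is an assumption made only for illustration:)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// VIP and port exactly as reported by the status logs above.
	url := "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-CA-signed cert; for a quick
			// reachability check this sketch skips verification (assumption).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(url)
	if err != nil {
		log.Fatal(err) // e.g. "connect: no route to host" when the endpoint is unreachable
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}
```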
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-104111 -n ha-104111
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-104111 logs -n 25: (1.69020168s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-104111 ssh -n ha-104111-m02 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04:/home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m04 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp testdata/cp-test.txt                                                | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111:/home/docker/cp-test_ha-104111-m04_ha-104111.txt                       |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111 sudo cat                                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111.txt                                 |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m02:/home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m02 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m03:/home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n                                                                 | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | ha-104111-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-104111 ssh -n ha-104111-m03 sudo cat                                          | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC | 29 Jul 24 13:32 UTC |
	|         | /home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-104111 node stop m02 -v=7                                                     | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-104111 node start m02 -v=7                                                    | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-104111 -v=7                                                           | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-104111 -v=7                                                                | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-104111 --wait=true -v=7                                                    | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:37 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-104111                                                                | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:42 UTC |                     |
	| node    | ha-104111 node delete m03 -v=7                                                   | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:42 UTC | 29 Jul 24 13:42 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-104111 stop -v=7                                                              | ha-104111 | jenkins | v1.33.1 | 29 Jul 24 13:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
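	The table above records the copy-file round trips that ran before the stop/start cycle: each cp entry pushes testdata/cp-test.txt to a node, and the ssh -n ... sudo cat entries that follow read it back from every node to verify the contents. A rough shell equivalent of one such round trip, using the same profile and node names as in the table (illustrative only, not taken verbatim from the audit log):
	
	    out/minikube-linux-amd64 -p ha-104111 cp testdata/cp-test.txt ha-104111-m04:/home/docker/cp-test.txt
	    out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test.txt"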
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:37:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:37:44.557508  999015 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:37:44.557762  999015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:37:44.557772  999015 out.go:304] Setting ErrFile to fd 2...
	I0729 13:37:44.557776  999015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:37:44.558031  999015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:37:44.558643  999015 out.go:298] Setting JSON to false
	I0729 13:37:44.559666  999015 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12017,"bootTime":1722248248,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:37:44.559728  999015 start.go:139] virtualization: kvm guest
	I0729 13:37:44.561760  999015 out.go:177] * [ha-104111] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:37:44.563055  999015 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:37:44.563111  999015 notify.go:220] Checking for updates...
	I0729 13:37:44.565495  999015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:37:44.566845  999015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:37:44.568270  999015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:37:44.569583  999015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:37:44.570695  999015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:37:44.572177  999015 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:44.572305  999015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:37:44.572777  999015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:37:44.572864  999015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:44.589115  999015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I0729 13:37:44.589509  999015 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:44.590171  999015 main.go:141] libmachine: Using API Version  1
	I0729 13:37:44.590195  999015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:44.590579  999015 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:44.590802  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:37:44.625137  999015 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:37:44.626309  999015 start.go:297] selected driver: kvm2
	I0729 13:37:44.626324  999015 start.go:901] validating driver "kvm2" against &{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:44.626459  999015 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:37:44.626771  999015 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:37:44.626837  999015 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:37:44.641639  999015 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:37:44.642302  999015 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:37:44.642332  999015 cni.go:84] Creating CNI manager for ""
	I0729 13:37:44.642340  999015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 13:37:44.642413  999015 start.go:340] cluster config:
	{Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:37:44.642545  999015 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:37:44.644113  999015 out.go:177] * Starting "ha-104111" primary control-plane node in "ha-104111" cluster
	I0729 13:37:44.645161  999015 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:37:44.645192  999015 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 13:37:44.645202  999015 cache.go:56] Caching tarball of preloaded images
	I0729 13:37:44.645265  999015 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 13:37:44.645275  999015 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 13:37:44.645393  999015 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/config.json ...
	I0729 13:37:44.645587  999015 start.go:360] acquireMachinesLock for ha-104111: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:37:44.645623  999015 start.go:364] duration metric: took 20.105µs to acquireMachinesLock for "ha-104111"
	I0729 13:37:44.645637  999015 start.go:96] Skipping create...Using existing machine configuration
	I0729 13:37:44.645644  999015 fix.go:54] fixHost starting: 
	I0729 13:37:44.645908  999015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:37:44.645940  999015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:37:44.659667  999015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I0729 13:37:44.660083  999015 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:37:44.660552  999015 main.go:141] libmachine: Using API Version  1
	I0729 13:37:44.660573  999015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:37:44.660958  999015 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:37:44.661139  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:37:44.661294  999015 main.go:141] libmachine: (ha-104111) Calling .GetState
	I0729 13:37:44.662748  999015 fix.go:112] recreateIfNeeded on ha-104111: state=Running err=<nil>
	W0729 13:37:44.662787  999015 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 13:37:44.664558  999015 out.go:177] * Updating the running kvm2 "ha-104111" VM ...
	I0729 13:37:44.665772  999015 machine.go:94] provisionDockerMachine start ...
	I0729 13:37:44.665788  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:37:44.665987  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:44.667977  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.668381  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:44.668460  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.668525  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:44.668680  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.668837  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.668977  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:44.669125  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:44.669315  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:37:44.669327  999015 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 13:37:44.781590  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111
	
	I0729 13:37:44.781633  999015 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:37:44.781915  999015 buildroot.go:166] provisioning hostname "ha-104111"
	I0729 13:37:44.781949  999015 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:37:44.782172  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:44.784931  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.785291  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:44.785316  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.785428  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:44.785633  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.785807  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.785978  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:44.786170  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:44.786370  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:37:44.786388  999015 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-104111 && echo "ha-104111" | sudo tee /etc/hostname
	I0729 13:37:44.911175  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-104111
	
	I0729 13:37:44.911201  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:44.913781  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.914138  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:44.914169  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:44.914327  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:44.914523  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.914696  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:44.914823  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:44.914992  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:44.915253  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:37:44.915275  999015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-104111' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-104111/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-104111' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:37:45.025302  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:37:45.025335  999015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 13:37:45.025400  999015 buildroot.go:174] setting up certificates
	I0729 13:37:45.025412  999015 provision.go:84] configureAuth start
	I0729 13:37:45.025426  999015 main.go:141] libmachine: (ha-104111) Calling .GetMachineName
	I0729 13:37:45.025717  999015 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:37:45.028316  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.028653  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:45.028681  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.028825  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:45.030976  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.031314  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:45.031340  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.031412  999015 provision.go:143] copyHostCerts
	I0729 13:37:45.031454  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:37:45.031496  999015 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 13:37:45.031507  999015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 13:37:45.031575  999015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 13:37:45.031683  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:37:45.031705  999015 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 13:37:45.031712  999015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 13:37:45.031736  999015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 13:37:45.031803  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:37:45.031819  999015 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 13:37:45.031832  999015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 13:37:45.031855  999015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 13:37:45.031915  999015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.ha-104111 san=[127.0.0.1 192.168.39.120 ha-104111 localhost minikube]
	I0729 13:37:45.247889  999015 provision.go:177] copyRemoteCerts
	I0729 13:37:45.247955  999015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:37:45.247986  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:45.250548  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.250858  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:45.250888  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.251015  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:45.251206  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:45.251342  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:45.251444  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:37:45.334918  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 13:37:45.335025  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:37:45.360087  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 13:37:45.360167  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 13:37:45.385151  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 13:37:45.385217  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:37:45.408311  999015 provision.go:87] duration metric: took 382.883645ms to configureAuth
	I0729 13:37:45.408337  999015 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:37:45.408593  999015 config.go:182] Loaded profile config "ha-104111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:37:45.408671  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:37:45.411136  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.411465  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:37:45.411489  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:37:45.411662  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:37:45.411867  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:45.412039  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:37:45.412138  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:37:45.412303  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:37:45.412500  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:37:45.412516  999015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 13:39:16.207069  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 13:39:16.207137  999015 machine.go:97] duration metric: took 1m31.541350835s to provisionDockerMachine
	I0729 13:39:16.207162  999015 start.go:293] postStartSetup for "ha-104111" (driver="kvm2")
	I0729 13:39:16.207178  999015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:39:16.207202  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.207611  999015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:39:16.207655  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.210870  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.211350  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.211379  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.211565  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.211785  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.211970  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.212111  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:39:16.299727  999015 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:39:16.304015  999015 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:39:16.304040  999015 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 13:39:16.304110  999015 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 13:39:16.304229  999015 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 13:39:16.304245  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 13:39:16.304360  999015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:39:16.313389  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:39:16.336977  999015 start.go:296] duration metric: took 129.801096ms for postStartSetup
	I0729 13:39:16.337021  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.337808  999015 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 13:39:16.337845  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.341241  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.341577  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.341600  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.341785  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.341959  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.342087  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.342217  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	W0729 13:39:16.426698  999015 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 13:39:16.426728  999015 fix.go:56] duration metric: took 1m31.781083968s for fixHost
	I0729 13:39:16.426750  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.429429  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.429861  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.429899  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.430076  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.430313  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.430497  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.430657  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.430806  999015 main.go:141] libmachine: Using SSH client type: native
	I0729 13:39:16.430995  999015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0729 13:39:16.431009  999015 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 13:39:16.540878  999015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722260356.497294306
	
	I0729 13:39:16.540903  999015 fix.go:216] guest clock: 1722260356.497294306
	I0729 13:39:16.540912  999015 fix.go:229] Guest: 2024-07-29 13:39:16.497294306 +0000 UTC Remote: 2024-07-29 13:39:16.42673407 +0000 UTC m=+91.906013509 (delta=70.560236ms)
	I0729 13:39:16.540939  999015 fix.go:200] guest clock delta is within tolerance: 70.560236ms
	I0729 13:39:16.540948  999015 start.go:83] releasing machines lock for "ha-104111", held for 1m31.895313773s
	I0729 13:39:16.540978  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.541226  999015 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:39:16.543728  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.544101  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.544121  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.544265  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.544798  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.544960  999015 main.go:141] libmachine: (ha-104111) Calling .DriverName
	I0729 13:39:16.545036  999015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:39:16.545091  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.545189  999015 ssh_runner.go:195] Run: cat /version.json
	I0729 13:39:16.545213  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHHostname
	I0729 13:39:16.547611  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.547819  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.548020  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.548043  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.548196  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:16.548211  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.548224  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:16.548425  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.548440  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHPort
	I0729 13:39:16.548618  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHKeyPath
	I0729 13:39:16.548625  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.548798  999015 main.go:141] libmachine: (ha-104111) Calling .GetSSHUsername
	I0729 13:39:16.548795  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:39:16.548915  999015 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/ha-104111/id_rsa Username:docker}
	I0729 13:39:16.650013  999015 ssh_runner.go:195] Run: systemctl --version
	I0729 13:39:16.656015  999015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 13:39:16.819246  999015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:39:16.825417  999015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:39:16.825493  999015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:39:16.835135  999015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 13:39:16.835163  999015 start.go:495] detecting cgroup driver to use...
	I0729 13:39:16.835235  999015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:39:16.855213  999015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:39:16.869132  999015 docker.go:217] disabling cri-docker service (if available) ...
	I0729 13:39:16.869201  999015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 13:39:16.883555  999015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 13:39:16.897616  999015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 13:39:17.058805  999015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 13:39:17.201726  999015 docker.go:233] disabling docker service ...
	I0729 13:39:17.201810  999015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 13:39:17.220066  999015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 13:39:17.234734  999015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 13:39:17.378527  999015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 13:39:17.519684  999015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 13:39:17.534499  999015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:39:17.553557  999015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 13:39:17.553632  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.564559  999015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 13:39:17.564639  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.576170  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.586652  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.597012  999015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:39:17.608509  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.619134  999015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.630264  999015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 13:39:17.641178  999015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:39:17.651662  999015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:39:17.661699  999015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:39:17.805400  999015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 13:39:24.335673  999015 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.530232076s)
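	Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) should leave /etc/crio/crio.conf.d/02-crio.conf with the settings sketched below. This is reconstructed from the logged commands, not output captured from the VM:
	
	    # Inspect the cri-o drop-in that the sed edits above target:
	    out/minikube-linux-amd64 -p ha-104111 ssh \
	      "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	    # Expected settings, per the commands in this log:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])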
	I0729 13:39:24.335711  999015 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 13:39:24.335761  999015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 13:39:24.341250  999015 start.go:563] Will wait 60s for crictl version
	I0729 13:39:24.341327  999015 ssh_runner.go:195] Run: which crictl
	I0729 13:39:24.345105  999015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 13:39:24.385651  999015 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 13:39:24.385761  999015 ssh_runner.go:195] Run: crio --version
	I0729 13:39:24.415826  999015 ssh_runner.go:195] Run: crio --version
	I0729 13:39:24.446246  999015 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 13:39:24.447746  999015 main.go:141] libmachine: (ha-104111) Calling .GetIP
	I0729 13:39:24.450264  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:24.450620  999015 main.go:141] libmachine: (ha-104111) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:4b:6b", ip: ""} in network mk-ha-104111: {Iface:virbr1 ExpiryTime:2024-07-29 14:28:04 +0000 UTC Type:0 Mac:52:54:00:44:4b:6b Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-104111 Clientid:01:52:54:00:44:4b:6b}
	I0729 13:39:24.450645  999015 main.go:141] libmachine: (ha-104111) DBG | domain ha-104111 has defined IP address 192.168.39.120 and MAC address 52:54:00:44:4b:6b in network mk-ha-104111
	I0729 13:39:24.450854  999015 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 13:39:24.455737  999015 kubeadm.go:883] updating cluster {Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 13:39:24.455900  999015 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 13:39:24.455966  999015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:39:24.504605  999015 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:39:24.504627  999015 crio.go:433] Images already preloaded, skipping extraction
	I0729 13:39:24.504676  999015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 13:39:24.541116  999015 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 13:39:24.541142  999015 cache_images.go:84] Images are preloaded, skipping loading
	I0729 13:39:24.541154  999015 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.30.3 crio true true} ...
	I0729 13:39:24.541281  999015 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-104111 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
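	The kubelet unit override shown above is what the node ends up running; a quick way to confirm the flags and config that actually took effect is to read them back over minikube ssh. The commands below are illustrative; only the --config path (/var/lib/kubelet/config.yaml) comes from this log:
	
	    out/minikube-linux-amd64 -p ha-104111 ssh "sudo systemctl cat kubelet"
	    out/minikube-linux-amd64 -p ha-104111 ssh "sudo cat /var/lib/kubelet/config.yaml"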
	I0729 13:39:24.541365  999015 ssh_runner.go:195] Run: crio config
	I0729 13:39:24.586689  999015 cni.go:84] Creating CNI manager for ""
	I0729 13:39:24.586707  999015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 13:39:24.586717  999015 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 13:39:24.586751  999015 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-104111 NodeName:ha-104111 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 13:39:24.586918  999015 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-104111"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
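	The rendered kubeadm configuration is written to /var/tmp/minikube/kubeadm.yaml.new a few steps later; a minimal sketch for inspecting it on the node, plus an optional kubeadm dry-run (hedged: the dry-run flags are standard kubeadm options, not something this test exercises):
	# Show the config exactly as minikube rendered it on the control-plane node.
	out/minikube-linux-amd64 -p ha-104111 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	# Optionally, let kubeadm parse the file without applying anything; preflight
	# errors are expected on an already-initialized node, hence the ignore flag.
	out/minikube-linux-amd64 -p ha-104111 ssh "sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run --ignore-preflight-errors=all"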
	
	I0729 13:39:24.586937  999015 kube-vip.go:115] generating kube-vip config ...
	I0729 13:39:24.586979  999015 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 13:39:24.598621  999015 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 13:39:24.598729  999015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
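	The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml below, so the kubelet runs it as a static pod. A minimal sketch for confirming that kube-vip came up and claimed the HA VIP (192.168.39.254 on eth0, per the config), assuming the node is reachable:
	# The static pod should show up as a running container named kube-vip.
	out/minikube-linux-amd64 -p ha-104111 ssh "sudo crictl ps --name kube-vip"
	# The elected leader adds the virtual IP to eth0; on that node this grep matches.
	out/minikube-linux-amd64 -p ha-104111 ssh "ip addr show eth0 | grep 192.168.39.254"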
	I0729 13:39:24.598775  999015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 13:39:24.608712  999015 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 13:39:24.608774  999015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 13:39:24.618073  999015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 13:39:24.634958  999015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 13:39:24.651274  999015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 13:39:24.667064  999015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 13:39:24.685546  999015 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 13:39:24.689642  999015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:39:24.839709  999015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 13:39:24.855145  999015 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111 for IP: 192.168.39.120
	I0729 13:39:24.855173  999015 certs.go:194] generating shared ca certs ...
	I0729 13:39:24.855197  999015 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:39:24.855376  999015 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 13:39:24.855428  999015 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 13:39:24.855444  999015 certs.go:256] generating profile certs ...
	I0729 13:39:24.855541  999015 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/client.key
	I0729 13:39:24.855591  999015 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.f7411b89
	I0729 13:39:24.855617  999015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.f7411b89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.120 192.168.39.140 192.168.39.202 192.168.39.254]
	I0729 13:39:25.455220  999015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.f7411b89 ...
	I0729 13:39:25.455254  999015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.f7411b89: {Name:mkc892b43f1affb5d7cb9aed542c4f523db3f899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:39:25.455430  999015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.f7411b89 ...
	I0729 13:39:25.455443  999015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.f7411b89: {Name:mk588100037b213c74cb58afa74cbaa38d605002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:39:25.455509  999015 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt.f7411b89 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt
	I0729 13:39:25.455670  999015 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key.f7411b89 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key
	I0729 13:39:25.455819  999015 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key
	I0729 13:39:25.455836  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 13:39:25.455849  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 13:39:25.455862  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 13:39:25.455874  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 13:39:25.455885  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 13:39:25.455900  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 13:39:25.455911  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 13:39:25.455922  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 13:39:25.455972  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 13:39:25.455999  999015 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 13:39:25.456010  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 13:39:25.456029  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 13:39:25.456055  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 13:39:25.456071  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 13:39:25.456109  999015 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 13:39:25.456134  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 13:39:25.456147  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 13:39:25.456159  999015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:39:25.456883  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 13:39:25.483262  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 13:39:25.506979  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 13:39:25.530915  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 13:39:25.555704  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 13:39:25.580023  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 13:39:25.605011  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 13:39:25.629923  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/ha-104111/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 13:39:25.654336  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 13:39:25.678545  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 13:39:25.702249  999015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 13:39:25.727489  999015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 13:39:25.744359  999015 ssh_runner.go:195] Run: openssl version
	I0729 13:39:25.750381  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 13:39:25.761422  999015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 13:39:25.766248  999015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 13:39:25.766322  999015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 13:39:25.772261  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 13:39:25.781853  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 13:39:25.792847  999015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 13:39:25.797342  999015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 13:39:25.797412  999015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 13:39:25.803282  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 13:39:25.812721  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 13:39:25.823697  999015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:39:25.828204  999015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:39:25.828257  999015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 13:39:25.834118  999015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 13:39:25.843359  999015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 13:39:25.847938  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 13:39:25.853595  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 13:39:25.859159  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 13:39:25.864829  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 13:39:25.870770  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 13:39:25.876341  999015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
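	The six checks above all use openssl's -checkend test, which exits 0 only if the certificate stays valid for the given number of seconds (86400 = 24h). A minimal sketch of the same check as a loop, run on the node, over the certificate names verified above:
	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
	    && echo "$c: valid for at least 24h" \
	    || echo "$c: expires (or is invalid) within 24h"
	done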
	I0729 13:39:25.881993  999015 kubeadm.go:392] StartCluster: {Name:ha-104111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-104111 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:39:25.882142  999015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 13:39:25.882200  999015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 13:39:25.917898  999015 cri.go:89] found id: "2b0b7b8d86d18e00d4def7e6ddd73b2109b7288d493f9c89c24f94c090aef401"
	I0729 13:39:25.917930  999015 cri.go:89] found id: "bd4ab1bad8ca9aaebbf3bdfabd8a7687a660ee5c1b925250be9a30f1eaeaf8b6"
	I0729 13:39:25.917936  999015 cri.go:89] found id: "f147ff5fea55b8c95ea2194801e766e3c3a118adbcbb0aa920a07bf4dd04b550"
	I0729 13:39:25.917940  999015 cri.go:89] found id: "1b86114506804a9a93ef3ca6b2254579d26919db666922aefc5ccef849a81f98"
	I0729 13:39:25.917944  999015 cri.go:89] found id: "721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56"
	I0729 13:39:25.917948  999015 cri.go:89] found id: "81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1"
	I0729 13:39:25.917953  999015 cri.go:89] found id: "8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1"
	I0729 13:39:25.917956  999015 cri.go:89] found id: "6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8"
	I0729 13:39:25.917960  999015 cri.go:89] found id: "50fe26dbcca1ab5b9d9fe7412b4950133069510ae36d9795ccd4866be11174cd"
	I0729 13:39:25.917969  999015 cri.go:89] found id: "e4cce61f41e5d4c78880527c2ae828948f376f275ed748786922e82b85a740b3"
	I0729 13:39:25.917973  999015 cri.go:89] found id: "8a9167ef54b81a6562000186bed478646762de7fef6329053c2018869987fdda"
	I0729 13:39:25.917977  999015 cri.go:89] found id: "e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1"
	I0729 13:39:25.917981  999015 cri.go:89] found id: "7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a"
	I0729 13:39:25.917985  999015 cri.go:89] found id: ""
	I0729 13:39:25.918042  999015 ssh_runner.go:195] Run: sudo runc list -f json
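	The container IDs listed above come from filtering CRI-O by pod namespace. A minimal sketch of the same query, first in the quiet form the test uses and then in a human-readable variant (both are standard crictl flags, not specific to this test):
	# IDs only, matching the "found id" list above.
	out/minikube-linux-amd64 -p ha-104111 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# Same filter with names, states and ages for easier reading.
	out/minikube-linux-amd64 -p ha-104111 ssh "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"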
	
	
	==> CRI-O <==
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.661441019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260708661399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca1e2c23-0c60-4826-a6e5-26f447557e4a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.668165648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ed61c5d-1f47-4c1a-a205-1e5a0119a2d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.668412391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ed61c5d-1f47-4c1a-a205-1e5a0119a2d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.669392730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:145852da7055033c14ba5cfc60ad0ea2687b2a74198afdedffbb8618b96d9b22,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260458255696085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260417259734776,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260408269110931,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260408251768596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bd816fab2d6a451651796d416a47ddea0ce0473c46f90b616d363041375e97,PodSandboxId:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722260404499337932,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ae26ae158cd144a916094ec9de5203551b5ddf380a59bb97405a628a465853,PodSandboxId:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722260383413902872,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e,PodSandboxId:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260371445869052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8,PodSandboxId:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260371345361388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c,PodSandboxId:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722260371146403737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29,PodSandboxId:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260371248139179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722260371087205050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722260371001033922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b,PodSandboxId:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260370948205305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209,PodSandboxId:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260366765075543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"n
ame\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722259875257881458,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743321120007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernet
es.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743268949875,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722259731581881315,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259728097871714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259707030052585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259706969359945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ed61c5d-1f47-4c1a-a205-1e5a0119a2d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.716980094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d286f8cf-de52-46ba-81ef-0ab3e9d0977e name=/runtime.v1.RuntimeService/Version
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.717051829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d286f8cf-de52-46ba-81ef-0ab3e9d0977e name=/runtime.v1.RuntimeService/Version
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.718259128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aadd530b-6991-4903-926a-f9774b5d176f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.718780652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260708718757163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aadd530b-6991-4903-926a-f9774b5d176f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.719432435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be0f38e8-1f5f-4f28-b677-5dc415896a49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.719485289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be0f38e8-1f5f-4f28-b677-5dc415896a49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.719972330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:145852da7055033c14ba5cfc60ad0ea2687b2a74198afdedffbb8618b96d9b22,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260458255696085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260417259734776,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260408269110931,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260408251768596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bd816fab2d6a451651796d416a47ddea0ce0473c46f90b616d363041375e97,PodSandboxId:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722260404499337932,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ae26ae158cd144a916094ec9de5203551b5ddf380a59bb97405a628a465853,PodSandboxId:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722260383413902872,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e,PodSandboxId:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260371445869052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8,PodSandboxId:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260371345361388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c,PodSandboxId:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722260371146403737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29,PodSandboxId:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260371248139179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722260371087205050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722260371001033922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b,PodSandboxId:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260370948205305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209,PodSandboxId:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260366765075543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"n
ame\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722259875257881458,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743321120007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernet
es.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743268949875,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722259731581881315,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259728097871714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259707030052585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259706969359945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be0f38e8-1f5f-4f28-b677-5dc415896a49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.761148665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e53c39f9-655d-4e06-821e-70bf07540eff name=/runtime.v1.RuntimeService/Version
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.761234679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e53c39f9-655d-4e06-821e-70bf07540eff name=/runtime.v1.RuntimeService/Version
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.762281157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e3dc74d-70b6-4443-adfc-017e654ced05 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.762788880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260708762765253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e3dc74d-70b6-4443-adfc-017e654ced05 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.763398808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8362a496-83d0-4e0b-80c2-fffc449ccaf8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.763455920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8362a496-83d0-4e0b-80c2-fffc449ccaf8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.763893859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:145852da7055033c14ba5cfc60ad0ea2687b2a74198afdedffbb8618b96d9b22,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260458255696085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260417259734776,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260408269110931,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260408251768596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bd816fab2d6a451651796d416a47ddea0ce0473c46f90b616d363041375e97,PodSandboxId:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722260404499337932,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ae26ae158cd144a916094ec9de5203551b5ddf380a59bb97405a628a465853,PodSandboxId:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722260383413902872,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e,PodSandboxId:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260371445869052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8,PodSandboxId:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260371345361388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c,PodSandboxId:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722260371146403737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29,PodSandboxId:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260371248139179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722260371087205050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722260371001033922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b,PodSandboxId:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260370948205305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209,PodSandboxId:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260366765075543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"n
ame\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722259875257881458,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743321120007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernet
es.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743268949875,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722259731581881315,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259728097871714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259707030052585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259706969359945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8362a496-83d0-4e0b-80c2-fffc449ccaf8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.804896526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c63ca100-8068-4949-aac8-9e1ff92e057b name=/runtime.v1.RuntimeService/Version
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.804963435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c63ca100-8068-4949-aac8-9e1ff92e057b name=/runtime.v1.RuntimeService/Version
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.806288783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc04ef31-208c-479d-8870-d5d4026197f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.806946777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722260708806894945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc04ef31-208c-479d-8870-d5d4026197f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.807422554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=385f3ea0-f54c-43e3-8bd8-4ae779791385 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.807470795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=385f3ea0-f54c-43e3-8bd8-4ae779791385 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 13:45:08 ha-104111 crio[3719]: time="2024-07-29 13:45:08.808050428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:145852da7055033c14ba5cfc60ad0ea2687b2a74198afdedffbb8618b96d9b22,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722260458255696085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722260417259734776,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722260408269110931,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f,PodSandboxId:f2717ec1ae79803a690b6a2d0649001089025bae4c8cff3f073535c63e3e877d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722260408251768596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61cc52e-771b-484a-99d6-8963665cb1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 492a79cd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bd816fab2d6a451651796d416a47ddea0ce0473c46f90b616d363041375e97,PodSandboxId:ab05b140cff1204b991bd7964a53af7a5b126994e94153a0ba209b48bd39a7a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722260404499337932,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotations:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ae26ae158cd144a916094ec9de5203551b5ddf380a59bb97405a628a465853,PodSandboxId:e0a4ba4b9a4396c40863b9763e6615ffac4ec97d757615f49da1dcb73e76dab9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722260383413902872,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cb1cfb5a4b597f16540b93a08b39fcb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e,PodSandboxId:f939461f859dd217f1d4a7d930422b89163056a724667bffb7622618075366ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722260371445869052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8,PodSandboxId:9d24fc0c1911edb744055edeac92bccb8147a1cd2047d390c20d23df3915bda2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260371345361388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c,PodSandboxId:d67531e4a04283651aa18171636b9bca58978e45dc52287ab53b6defe85acbc3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722260371146403737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29,PodSandboxId:fd64d3f03154a4e06ac82e12322275445023cc4fe165a137fd4e847380aeb121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722260371248139179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4,PodSandboxId:8e5557e2793d9874f8b9c116d635669034711c4d2ad63fb97625509596ebf5aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722260371087205050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdd0eae9701cf7d4013ed5835b6fc65,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac,PodSandboxId:0903a4fd4a46c6933b98e378c71522c67f4979f66eb8f977c8a9f63c18e11e8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722260371001033922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc73c358411265f24a0fdb288ab5434e,},Annotations:map[string]string{io.kubernetes.container.hash: fa230ea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b,PodSandboxId:fe3b7fb7cf5579d4a2607d42cee331c75364076d8e03c034b0015e7deb4fcc21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722260370948205305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209,PodSandboxId:fb6786851a9e9b5ef9c165e6c255e9262884c7e2ca9578cfbc4e403b2f05a484,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722260366765075543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernetes.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"n
ame\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a033e8feb22aa74a1673da73b3c5bab08248299304e6a34a7f7064468eeb5d,PodSandboxId:ab591310b6636199927f6aca9abcc3a68cb2149e7f2001e4ffcd7ce08d966de0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722259875257881458,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7xsjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fbdab29-6a6d-4b47-8df5-641b9aad98f0,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 769a90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56,PodSandboxId:d0c4b0845fee9c8c5a409d1d96017b0e56e37a4fb5f685b4e69bc4626c12ffd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743321120007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9jrnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453ed97-efb4-41c1-8bfb-e7e004e618e0,},Annotations:map[string]string{io.kubernet
es.container.hash: 72f497a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1,PodSandboxId:32a1e9c01260e16fc517c9caebdd1556fbbfcaabc5ff83ba679e2ce763d3ee50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722259743268949875,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-gcf7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196981ba-ed16-427c-ae8b-9b7e8ff36be2,},Annotations:map[string]string{io.kubernetes.container.hash: 4a624f81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1,PodSandboxId:6b9961791d750fd9d9a7d40cf02ff0c0f6e938c724b2b0787ebeb23a431b9beb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722259731581881315,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9phpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e9c45f-5176-492e-90c7-49b0201afe1e,},Annotations:map[string]string{io.kubernetes.container.hash: 867b7308,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8,PodSandboxId:60e3f945e9e89cf01f1a900e939ca51214ea0d79a2a69da731b49606960a6d05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722259728097871714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6kkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be20af3-1e62-4e2c-bb0c-26ab4cf0eed1,},Annotations:map[string]string{io.kubernetes.container.hash: 32c1dc3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1,PodSandboxId:a33809de7d3a6efb269fca0ca670a49eb3a11c9845507c3110c8509574ae03e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722259707030052585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2438b1d75fb1de3aa096517b67661add,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a,PodSandboxId:3c847132555037fab395549f498d9f9aad2f651da1470981906bc62a560c615c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722259706969359945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-104111,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cb06508783f1cdddfbd3cd4c58d73c,},Annotations:map[string]string{io.kubernetes.container.hash: 9edecd9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=385f3ea0-f54c-43e3-8bd8-4ae779791385 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	145852da70550       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   f2717ec1ae798       storage-provisioner
	d170d569040e7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   8e5557e2793d9       kube-controller-manager-ha-104111
	e2a3169f4bbf4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Running             kube-apiserver            3                   0903a4fd4a46c       kube-apiserver-ha-104111
	ae091ed0f6e94       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   f2717ec1ae798       storage-provisioner
	05bd816fab2d6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   ab05b140cff12       busybox-fc5497c4f-7xsjn
	a1ae26ae158cd       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   e0a4ba4b9a439       kube-vip-ha-104111
	e73307e139630       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   f939461f859dd       kube-proxy-n6kkf
	b9f80c23aaa9e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   9d24fc0c1911e       coredns-7db6d8ff4d-gcf7q
	ba3613e050212       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   fd64d3f03154a       etcd-ha-104111
	af5d35941c2b5       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   d67531e4a0428       kindnet-9phpm
	d14f317d46e88       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   8e5557e2793d9       kube-controller-manager-ha-104111
	56ef5416acf68       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   0903a4fd4a46c       kube-apiserver-ha-104111
	6682bba661cd3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   fe3b7fb7cf557       kube-scheduler-ha-104111
	1aae34217228b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   fb6786851a9e9       coredns-7db6d8ff4d-9jrnl
	d2a033e8feb22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   ab591310b6636       busybox-fc5497c4f-7xsjn
	721762ac4017a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   d0c4b0845fee9       coredns-7db6d8ff4d-9jrnl
	81eca3ce5b15d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   32a1e9c01260e       coredns-7db6d8ff4d-gcf7q
	8fcba14c355c5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   6b9961791d750       kindnet-9phpm
	6bc357136c66b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   60e3f945e9e89       kube-proxy-n6kkf
	e80af660361f5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   a33809de7d3a6       kube-scheduler-ha-104111
	7606e1f107d6c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   3c84713255503       etcd-ha-104111
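A container listing like the one above can usually be reproduced against the CRI runtime inside the guest VM; as a rough sketch only (assuming the minikube profile is named ha-104111, matching the node names above, and that crictl is available in the guest):

	# list all containers, including exited ones, via the CRI client
	minikube -p ha-104111 ssh "sudo crictl ps -a"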
	
	
	==> coredns [1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2122936823]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:39:42.126) (total time: 10001ms):
	Trace[2122936823]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:39:52.127)
	Trace[2122936823]: [10.001302705s] [10.001302705s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56460->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56460->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [721762ac4017a84454febe3fd71ab6671be9e230d7b785627b43bdafe8478d56] <==
	[INFO] 10.244.2.2:35659 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00147984s
	[INFO] 10.244.2.2:53135 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000226257s
	[INFO] 10.244.2.2:49731 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189094s
	[INFO] 10.244.2.2:47456 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130859s
	[INFO] 10.244.2.2:41111 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123604s
	[INFO] 10.244.1.2:55083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114636s
	[INFO] 10.244.1.2:48422 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00109487s
	[INFO] 10.244.0.4:39213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116126s
	[INFO] 10.244.0.4:33260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068728s
	[INFO] 10.244.2.2:48083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166018s
	[INFO] 10.244.2.2:58646 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172185s
	[INFO] 10.244.2.2:35393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009321s
	[INFO] 10.244.1.2:57222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116426s
	[INFO] 10.244.0.4:60530 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165705s
	[INFO] 10.244.0.4:35848 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187393s
	[INFO] 10.244.0.4:34740 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104846s
	[INFO] 10.244.2.2:55008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235338s
	[INFO] 10.244.2.2:47084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152504s
	[INFO] 10.244.2.2:39329 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115623s
	[INFO] 10.244.1.2:57485 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001155s
	[INFO] 10.244.1.2:42349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100298s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1878&timeout=8m49s&timeoutSeconds=529&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1878&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [81eca3ce5b15d81536c74dc285118c00ec013710df992caad1867b7c5e7f75a1] <==
	[INFO] 10.244.0.4:59749 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102743s
	[INFO] 10.244.0.4:46792 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124933s
	[INFO] 10.244.2.2:34901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159776s
	[INFO] 10.244.2.2:53333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001076187s
	[INFO] 10.244.1.2:57672 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002003185s
	[INFO] 10.244.1.2:53227 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161629s
	[INFO] 10.244.1.2:38444 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092353s
	[INFO] 10.244.1.2:56499 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211011s
	[INFO] 10.244.1.2:57556 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068457s
	[INFO] 10.244.1.2:34023 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109815s
	[INFO] 10.244.0.4:40329 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111231s
	[INFO] 10.244.0.4:38637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005437s
	[INFO] 10.244.2.2:36810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104372s
	[INFO] 10.244.1.2:53024 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122232s
	[INFO] 10.244.1.2:40257 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148245s
	[INFO] 10.244.1.2:41500 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080394s
	[INFO] 10.244.0.4:48915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151276s
	[INFO] 10.244.2.2:60231 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001284s
	[INFO] 10.244.1.2:33829 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154134s
	[INFO] 10.244.1.2:57945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123045s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1833&timeout=5m39s&timeoutSeconds=339&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1875&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1878&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [b9f80c23aaa9ea40aa6a16faaf7c226865927c3b2060d3c13580ec1ea78239c8] <==
	Trace[2041965860]: [10.002316691s] [10.002316691s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1718781440]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 13:39:38.327) (total time: 10000ms):
	Trace[1718781440]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:39:48.328)
	Trace[1718781440]: [10.000982314s] [10.000982314s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
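Per-container logs such as the coredns output above are normally retrievable with crictl and the container ID shown in each section header; for example (same profile-name assumption as above):

	# fetch the logs of the first coredns container shown in this report
	minikube -p ha-104111 ssh "sudo crictl logs 1aae34217228bbdcb590cfc9744c1b1b7f8689f1ba0cf1c5323d27762bf4d209"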
	
	
	==> describe nodes <==
	Name:               ha-104111
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_28_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:28:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:45:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:40:10 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:40:10 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:40:10 +0000   Mon, 29 Jul 2024 13:28:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:40:10 +0000   Mon, 29 Jul 2024 13:29:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-104111
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 613eb8d959344be3989ec50055edd8a7
	  System UUID:                613eb8d9-5934-4be3-989e-c50055edd8a7
	  Boot ID:                    5cf31ff2-8a2f-47f5-8440-f13293b7049d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7xsjn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-9jrnl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-gcf7q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-104111                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-9phpm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-104111             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-104111    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-n6kkf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-104111             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-104111                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4m51s              kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-104111 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-104111 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-104111 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                kubelet          Node ha-104111 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node ha-104111 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node ha-104111 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal   NodeReady                16m                kubelet          Node ha-104111 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Warning  ContainerGCFailed        6m36s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m46s              node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal   RegisteredNode           4m41s              node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	  Normal   RegisteredNode           3m8s               node-controller  Node ha-104111 event: Registered Node ha-104111 in Controller
	
	
	Name:               ha-104111-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_29_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:29:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:45:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 13:43:23 +0000   Mon, 29 Jul 2024 13:43:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 13:43:23 +0000   Mon, 29 Jul 2024 13:43:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 13:43:23 +0000   Mon, 29 Jul 2024 13:43:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 13:43:23 +0000   Mon, 29 Jul 2024 13:43:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    ha-104111-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0636dc68c5464326baedc11fd97131b2
	  System UUID:                0636dc68-c546-4326-baed-c11fd97131b2
	  Boot ID:                    6b6d38a1-f64b-49d6-b7a7-65634eab971a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sf8mb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-104111-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-njndz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-104111-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-104111-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-5dnvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-104111-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-104111-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m52s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-104111-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-104111-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-104111-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-104111-m02 status is now: NodeNotReady
	  Normal  Starting                 5m22s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m22s)  kubelet          Node ha-104111-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m22s)  kubelet          Node ha-104111-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m22s)  kubelet          Node ha-104111-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-104111-m02 event: Registered Node ha-104111-m02 in Controller
	  Normal  NodeNotReady             111s                   node-controller  Node ha-104111-m02 status is now: NodeNotReady
	
	
	Name:               ha-104111-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-104111-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=ha-104111
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T13_31_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:31:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-104111-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 13:42:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 13:42:21 +0000   Mon, 29 Jul 2024 13:43:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 13:42:21 +0000   Mon, 29 Jul 2024 13:43:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 13:42:21 +0000   Mon, 29 Jul 2024 13:43:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 13:42:21 +0000   Mon, 29 Jul 2024 13:43:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-104111-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f6a0723aec74e89b187376957d3127c
	  System UUID:                4f6a0723-aec7-4e89-b187-376957d3127c
	  Boot ID:                    d938a38c-b54f-4872-be95-d0a289fd5060
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhzkb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-fbnbc              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-cmtgm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-104111-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-104111-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-104111-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-104111-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m46s                  node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   RegisteredNode           4m41s                  node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Normal   RegisteredNode           3m8s                   node-controller  Node ha-104111-m04 event: Registered Node ha-104111-m04 in Controller
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-104111-m04 has been rebooted, boot id: d938a38c-b54f-4872-be95-d0a289fd5060
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-104111-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-104111-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-104111-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m48s                  kubelet          Node ha-104111-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m48s                  kubelet          Node ha-104111-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 4m6s)    node-controller  Node ha-104111-m04 status is now: NodeNotReady
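The node descriptions above correspond to kubectl's describe output; assuming the kubeconfig context carries the profile name ha-104111, an equivalent query would be:

	kubectl --context ha-104111 describe nodes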
	
	
	==> dmesg <==
	[ +10.840145] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057324] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056430] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.166263] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.131459] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268832] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.161114] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +3.971481] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.058723] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.299885] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.972930] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 13:29] kauditd_printk_skb: 38 callbacks suppressed
	[ +36.765451] kauditd_printk_skb: 24 callbacks suppressed
	[Jul29 13:39] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.158398] systemd-fstab-generator[3648]: Ignoring "noauto" option for root device
	[  +0.175912] systemd-fstab-generator[3662]: Ignoring "noauto" option for root device
	[  +0.143053] systemd-fstab-generator[3674]: Ignoring "noauto" option for root device
	[  +0.282599] systemd-fstab-generator[3702]: Ignoring "noauto" option for root device
	[  +7.025238] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +0.093194] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.891546] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.089559] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.058994] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 13:40] kauditd_printk_skb: 10 callbacks suppressed
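The kernel messages above come from the guest's ring buffer and can generally be re-collected over SSH (same profile-name assumption as above):

	minikube -p ha-104111 ssh "sudo dmesg"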
	
	
	==> etcd [7606e1f107d6cded50cb09c9c101a2cac785cbcf697b2ffdecb599d2e148de2a] <==
	2024/07/29 13:37:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T13:37:45.539706Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"678.363436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T13:37:45.539724Z","caller":"traceutil/trace.go:171","msg":"trace[61922858] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; }","duration":"678.397244ms","start":"2024-07-29T13:37:44.861322Z","end":"2024-07-29T13:37:45.53972Z","steps":["trace[61922858] 'agreement among raft nodes before linearized reading'  (duration: 678.373825ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:37:45.539739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T13:37:44.861316Z","time spent":"678.419577ms","remote":"127.0.0.1:45912","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 "}
	2024/07/29 13:37:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T13:37:45.608466Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.120:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T13:37:45.608634Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.120:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T13:37:45.608768Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"af2c917f7a70ddd0","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T13:37:45.608951Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.608997Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609024Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609063Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609126Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609179Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.60921Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"27be60374a17232"}
	{"level":"info","ts":"2024-07-29T13:37:45.609218Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609226Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609244Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609323Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609349Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609391Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.609419Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:37:45.612771Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.120:2380"}
	{"level":"info","ts":"2024-07-29T13:37:45.612904Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.120:2380"}
	{"level":"info","ts":"2024-07-29T13:37:45.61294Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-104111","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.120:2380"],"advertise-client-urls":["https://192.168.39.120:2379"]}
	
	
	==> etcd [ba3613e0502120e9000731192ebe7375f108449671e6f56de82157ff90cf4f29] <==
	{"level":"warn","ts":"2024-07-29T13:41:42.10503Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9e64f0bb14d4f4a0","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T13:41:43.83702Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:41:43.838805Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:41:43.838916Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:41:43.848684Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"af2c917f7a70ddd0","to":"9e64f0bb14d4f4a0","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T13:41:43.848823Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:41:43.84924Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"af2c917f7a70ddd0","to":"9e64f0bb14d4f4a0","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T13:41:43.849304Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:42:35.086173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 switched to configuration voters=(178989512727294514 12622623832313748944)"}
	{"level":"info","ts":"2024-07-29T13:42:35.088268Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"f3de5e1602edc73b","local-member-id":"af2c917f7a70ddd0","removed-remote-peer-id":"9e64f0bb14d4f4a0","removed-remote-peer-urls":["https://192.168.39.202:2380"]}
	{"level":"info","ts":"2024-07-29T13:42:35.088399Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"warn","ts":"2024-07-29T13:42:35.088734Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:42:35.088794Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"warn","ts":"2024-07-29T13:42:35.090219Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:42:35.090301Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:42:35.090605Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"warn","ts":"2024-07-29T13:42:35.090843Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T13:42:35.090919Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"9e64f0bb14d4f4a0","error":"failed to read 9e64f0bb14d4f4a0 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T13:42:35.090972Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"warn","ts":"2024-07-29T13:42:35.091171Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T13:42:35.091226Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"af2c917f7a70ddd0","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:42:35.091364Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:42:35.091405Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"af2c917f7a70ddd0","removed-remote-peer-id":"9e64f0bb14d4f4a0"}
	{"level":"info","ts":"2024-07-29T13:42:35.091475Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"af2c917f7a70ddd0","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"9e64f0bb14d4f4a0"}
	{"level":"warn","ts":"2024-07-29T13:42:35.106477Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.202:47676","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:45:09 up 17 min,  0 users,  load average: 0.48, 0.42, 0.36
	Linux ha-104111 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8fcba14c355c5fc17ddee3394c79f5ffaea681079c29edd33b122d7aa80c36f1] <==
	I0729 13:37:12.634225       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:37:22.627786       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:37:22.627901       1 main.go:299] handling current node
	I0729 13:37:22.627931       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:37:22.627949       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:37:22.628134       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:37:22.628157       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:37:22.628222       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:37:22.628241       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:37:32.633218       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:37:32.633263       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:37:32.633476       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:37:32.633518       1 main.go:299] handling current node
	I0729 13:37:32.633541       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:37:32.633642       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:37:32.633743       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:37:32.633765       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:37:42.627963       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0729 13:37:42.628055       1 main.go:322] Node ha-104111-m03 has CIDR [10.244.2.0/24] 
	I0729 13:37:42.628233       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:37:42.628258       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:37:42.628327       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:37:42.628347       1 main.go:299] handling current node
	I0729 13:37:42.628369       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:37:42.628390       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [af5d35941c2b55304add09b92346c5ee66dae70db5d631b55a4c9e748a78cc1c] <==
	I0729 13:44:22.357141       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:44:32.349177       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:44:32.349222       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:44:32.349374       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:44:32.349400       1 main.go:299] handling current node
	I0729 13:44:32.349418       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:44:32.349423       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:44:42.357769       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:44:42.357861       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:44:42.358004       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:44:42.358027       1 main.go:299] handling current node
	I0729 13:44:42.358049       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:44:42.358083       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:44:52.349066       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:44:52.350238       1 main.go:299] handling current node
	I0729 13:44:52.350333       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:44:52.350363       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	I0729 13:44:52.350730       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:44:52.350764       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:45:02.356863       1 main.go:295] Handling node with IPs: map[192.168.39.40:{}]
	I0729 13:45:02.357063       1 main.go:322] Node ha-104111-m04 has CIDR [10.244.3.0/24] 
	I0729 13:45:02.357312       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0729 13:45:02.357342       1 main.go:299] handling current node
	I0729 13:45:02.357383       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 13:45:02.357400       1 main.go:322] Node ha-104111-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [56ef5416acf6856c647ca6214560e4d2bfefeb7e1378e60628b77e748be400ac] <==
	I0729 13:39:31.661209       1 options.go:221] external host was not specified, using 192.168.39.120
	I0729 13:39:31.666061       1 server.go:148] Version: v1.30.3
	I0729 13:39:31.666120       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:39:32.388673       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 13:39:32.403344       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:39:32.412927       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 13:39:32.412991       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 13:39:32.413182       1 instance.go:299] Using reconciler: lease
	W0729 13:39:52.383090       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0729 13:39:52.383496       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0729 13:39:52.414501       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e2a3169f4bbf4da4cbecfa9b2982c6bf76dcc3b02c756aafe16fcbd013e5ecbc] <==
	I0729 13:40:10.545203       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0729 13:40:10.545237       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0729 13:40:10.623521       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 13:40:10.623535       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 13:40:10.623998       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 13:40:10.625193       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 13:40:10.626166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 13:40:10.633944       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 13:40:10.634430       1 shared_informer.go:320] Caches are synced for configmaps
	W0729 13:40:10.642175       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202]
	I0729 13:40:10.645945       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 13:40:10.645990       1 aggregator.go:165] initial CRD sync complete...
	I0729 13:40:10.646002       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 13:40:10.646007       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 13:40:10.646012       1 cache.go:39] Caches are synced for autoregister controller
	I0729 13:40:10.680084       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 13:40:10.683449       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 13:40:10.683486       1 policy_source.go:224] refreshing policies
	I0729 13:40:10.718829       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 13:40:10.743648       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 13:40:10.757684       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 13:40:10.767201       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 13:40:11.533181       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 13:40:11.996021       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.120 192.168.39.140 192.168.39.202]
	W0729 13:40:21.996038       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.120 192.168.39.140]
	
	
	==> kube-controller-manager [d14f317d46e88a1b1f268e7cd9b0473d77ffe2525b92da2cf156d1ecd2c489c4] <==
	I0729 13:39:32.804114       1 serving.go:380] Generated self-signed cert in-memory
	I0729 13:39:33.323118       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 13:39:33.323160       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:39:33.325115       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 13:39:33.325274       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 13:39:33.325784       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 13:39:33.325865       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0729 13:39:53.420626       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.120:8443/healthz\": dial tcp 192.168.39.120:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d170d569040e71ebba12d18aa4488079f9988b7cd21fa4b35b0e965b490e63f4] <==
	I0729 13:43:18.434692       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.711423ms"
	I0729 13:43:18.434793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.871µs"
	I0729 13:43:23.405494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.506257ms"
	I0729 13:43:23.405783       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.915µs"
	I0729 13:43:28.153434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.505665ms"
	I0729 13:43:28.154244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.993µs"
	E0729 13:43:28.220019       1 gc_controller.go:153] "Failed to get node" err="node \"ha-104111-m03\" not found" logger="pod-garbage-collector-controller" node="ha-104111-m03"
	E0729 13:43:28.220042       1 gc_controller.go:153] "Failed to get node" err="node \"ha-104111-m03\" not found" logger="pod-garbage-collector-controller" node="ha-104111-m03"
	E0729 13:43:28.220050       1 gc_controller.go:153] "Failed to get node" err="node \"ha-104111-m03\" not found" logger="pod-garbage-collector-controller" node="ha-104111-m03"
	E0729 13:43:28.220060       1 gc_controller.go:153] "Failed to get node" err="node \"ha-104111-m03\" not found" logger="pod-garbage-collector-controller" node="ha-104111-m03"
	E0729 13:43:28.220066       1 gc_controller.go:153] "Failed to get node" err="node \"ha-104111-m03\" not found" logger="pod-garbage-collector-controller" node="ha-104111-m03"
	I0729 13:43:28.233199       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mt9dk"
	I0729 13:43:28.264672       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mt9dk"
	I0729 13:43:28.264718       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-104111-m03"
	I0729 13:43:28.297330       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-104111-m03"
	I0729 13:43:28.297432       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-104111-m03"
	I0729 13:43:28.331269       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-104111-m03"
	I0729 13:43:28.331402       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-104111-m03"
	I0729 13:43:28.357913       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-104111-m03"
	I0729 13:43:28.358065       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-m765x"
	I0729 13:43:28.391646       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-m765x"
	I0729 13:43:28.391791       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-104111-m03"
	I0729 13:43:28.421850       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-104111-m03"
	I0729 13:43:28.421967       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-104111-m03"
	I0729 13:43:28.450142       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-104111-m03"
	
	
	==> kube-proxy [6bc357136c66b6120efe2eee1197a8d0dabec7279ff50cf8ddea25182b0d4ae8] <==
	E0729 13:36:42.071518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:42.071769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:42.071874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:42.071750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:42.071953       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:48.214410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:48.214504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:48.214762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:48.214811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:36:48.214874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:36:48.214920       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:00.502492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:00.502492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:00.503267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:00.503298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:00.502618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:00.503352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:15.862231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:15.862374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:22.005975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:22.006088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:28.150131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:28.150185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-104111&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 13:37:43.509039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 13:37:43.509092       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e73307e1396301309c14bd627fe5c2d872ed0db38a27877dbd101abca636145e] <==
	I0729 13:39:32.726752       1 server_linux.go:69] "Using iptables proxy"
	E0729 13:39:34.101042       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 13:39:37.173075       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 13:39:40.245961       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 13:39:46.389075       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 13:39:58.677221       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-104111\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 13:40:17.538492       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	I0729 13:40:17.592178       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:40:17.592261       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:40:17.592285       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:40:17.596694       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:40:17.597135       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:40:17.597240       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:40:17.600397       1 config.go:192] "Starting service config controller"
	I0729 13:40:17.600459       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:40:17.600499       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:40:17.600526       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:40:17.604150       1 config.go:319] "Starting node config controller"
	I0729 13:40:17.604188       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:40:17.701435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:40:17.701533       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:40:17.704397       1 shared_informer.go:320] Caches are synced for node config
	W0729 13:43:03.391094       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0729 13:43:03.391094       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0729 13:43:03.391211       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [6682bba661cd30852620bb8cae70c5cb86f9db46e582a6b7776a9f88229b529b] <==
	W0729 13:40:01.935232       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.120:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:01.935303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.120:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:02.035358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.120:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:02.035539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.120:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:02.629461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.120:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:02.629518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.120:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:02.678262       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.120:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:02.678345       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.120:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:02.961108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.120:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:02.961275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.120:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:03.837337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.120:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	E0729 13:40:03.837502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.120:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.120:8443: connect: connection refused
	W0729 13:40:10.554926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:40:10.554988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:40:10.555220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:40:10.555264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 13:40:10.555309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:40:10.555341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:40:10.555385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:40:10.555412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 13:40:10.555452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 13:40:10.555477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 13:40:10.555521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:40:10.557633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 13:40:15.124646       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e80af660361f5449de6286725b48cc816a32581672735b35e4ac2c55495983d1] <==
	W0729 13:37:41.287834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:37:41.287933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:37:41.524725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 13:37:41.524775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:37:41.642765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:37:41.642815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 13:37:41.729604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 13:37:41.729699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 13:37:44.270342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 13:37:44.270395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 13:37:44.297862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 13:37:44.297919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 13:37:44.411217       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 13:37:44.411334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 13:37:44.424377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:37:44.424434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 13:37:44.518120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:37:44.518176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:37:44.627616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:37:44.627676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:37:44.841290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 13:37:44.841320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 13:37:44.934709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:37:44.934776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 13:37:45.516615       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 13:40:47 ha-104111 kubelet[1361]: E0729 13:40:47.240684    1361 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b61cc52e-771b-484a-99d6-8963665cb1e8)\"" pod="kube-system/storage-provisioner" podUID="b61cc52e-771b-484a-99d6-8963665cb1e8"
	Jul 29 13:40:48 ha-104111 kubelet[1361]: I0729 13:40:48.240665    1361 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-104111" podUID="edfeb506-2884-4406-92cf-c35fce56d7c4"
	Jul 29 13:40:48 ha-104111 kubelet[1361]: I0729 13:40:48.259175    1361 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-104111"
	Jul 29 13:40:58 ha-104111 kubelet[1361]: I0729 13:40:58.240154    1361 scope.go:117] "RemoveContainer" containerID="ae091ed0f6e9492fda87cc2caa224d81e5d4aac495f07c0b6a5c340ba7ed513f"
	Jul 29 13:40:58 ha-104111 kubelet[1361]: I0729 13:40:58.967435    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-104111" podStartSLOduration=10.967411321 podStartE2EDuration="10.967411321s" podCreationTimestamp="2024-07-29 13:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 13:40:53.25891921 +0000 UTC m=+740.164403770" watchObservedRunningTime="2024-07-29 13:40:58.967411321 +0000 UTC m=+745.872895880"
	Jul 29 13:41:33 ha-104111 kubelet[1361]: E0729 13:41:33.268948    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:41:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:41:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:41:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:41:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:42:33 ha-104111 kubelet[1361]: E0729 13:42:33.265521    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:42:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:42:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:42:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:42:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:43:33 ha-104111 kubelet[1361]: E0729 13:43:33.264420    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:43:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:43:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:43:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:43:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 13:44:33 ha-104111 kubelet[1361]: E0729 13:44:33.264708    1361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 13:44:33 ha-104111 kubelet[1361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 13:44:33 ha-104111 kubelet[1361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 13:44:33 ha-104111 kubelet[1361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 13:44:33 ha-104111 kubelet[1361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0729 13:45:08.376607 1001476 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19338-974764/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-104111 -n ha-104111
helpers_test.go:261: (dbg) Run:  kubectl --context ha-104111 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.74s)

x
+
TestMultiNode/serial/RestartKeepsNodes (320.44s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-999945
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-999945
E0729 14:02:06.667358  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-999945: exit status 82 (2m1.755894433s)

-- stdout --
	* Stopping node "multinode-999945-m03"  ...
	* Stopping node "multinode-999945-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-999945" : exit status 82
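The stop timed out (GUEST_STOP_TIMEOUT) while the VM stayed in state "Running", so the test proceeded to restart instead. Following the advice box above, the logs it asks for could be collected with something like (a sketch; --file and -p are standard minikube flags, and logs.txt is only an illustrative name):

	out/minikube-linux-amd64 -p multinode-999945 logs --file=logs.txt

Because this job runs the kvm2 driver, the domain state can also be cross-checked directly against libvirt, assuming virsh access on the agent:

	sudo virsh list --all | grep multinode-999945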
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999945 --wait=true -v=8 --alsologtostderr
E0729 14:04:30.664721  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 14:05:09.717973  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-999945 --wait=true -v=8 --alsologtostderr: (3m16.416063926s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-999945
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-999945 -n multinode-999945
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-999945 logs -n 25: (1.50516271s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m02:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2451309451/001/cp-test_multinode-999945-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m02:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945:/home/docker/cp-test_multinode-999945-m02_multinode-999945.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n multinode-999945 sudo cat                                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /home/docker/cp-test_multinode-999945-m02_multinode-999945.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m02:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03:/home/docker/cp-test_multinode-999945-m02_multinode-999945-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n multinode-999945-m03 sudo cat                                   | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /home/docker/cp-test_multinode-999945-m02_multinode-999945-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp testdata/cp-test.txt                                                | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2451309451/001/cp-test_multinode-999945-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945:/home/docker/cp-test_multinode-999945-m03_multinode-999945.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n multinode-999945 sudo cat                                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /home/docker/cp-test_multinode-999945-m03_multinode-999945.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02:/home/docker/cp-test_multinode-999945-m03_multinode-999945-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n multinode-999945-m02 sudo cat                                   | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /home/docker/cp-test_multinode-999945-m03_multinode-999945-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-999945 node stop m03                                                          | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	| node    | multinode-999945 node start                                                             | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 14:00 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-999945                                                                | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:00 UTC |                     |
	| stop    | -p multinode-999945                                                                     | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:00 UTC |                     |
	| start   | -p multinode-999945                                                                     | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:02 UTC | 29 Jul 24 14:05 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-999945                                                                | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:05 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
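	The Audit table above records the copy-and-verify pattern the multinode tests exercise: "minikube cp" pushes a file onto a node and "minikube ssh -n <node>" reads it back. A minimal round-trip against this profile, assuming the cluster is up (paths and file names here simply mirror the table), would look like:

	out/minikube-linux-amd64 -p multinode-999945 cp testdata/cp-test.txt multinode-999945-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m02 "sudo cat /home/docker/cp-test.txt"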
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:02:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:02:26.264359 1010714 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:02:26.264503 1010714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:02:26.264514 1010714 out.go:304] Setting ErrFile to fd 2...
	I0729 14:02:26.264518 1010714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:02:26.264751 1010714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:02:26.265372 1010714 out.go:298] Setting JSON to false
	I0729 14:02:26.266414 1010714 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13498,"bootTime":1722248248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:02:26.266487 1010714 start.go:139] virtualization: kvm guest
	I0729 14:02:26.268688 1010714 out.go:177] * [multinode-999945] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:02:26.270337 1010714 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:02:26.270338 1010714 notify.go:220] Checking for updates...
	I0729 14:02:26.271895 1010714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:02:26.273236 1010714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:02:26.274472 1010714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:02:26.275760 1010714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:02:26.276959 1010714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:02:26.278522 1010714 config.go:182] Loaded profile config "multinode-999945": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:02:26.278640 1010714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:02:26.279125 1010714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:02:26.279222 1010714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:02:26.295397 1010714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0729 14:02:26.295860 1010714 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:02:26.296629 1010714 main.go:141] libmachine: Using API Version  1
	I0729 14:02:26.296650 1010714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:02:26.297057 1010714 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:02:26.297221 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:02:26.333206 1010714 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:02:26.334473 1010714 start.go:297] selected driver: kvm2
	I0729 14:02:26.334497 1010714 start.go:901] validating driver "kvm2" against &{Name:multinode-999945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:02:26.334712 1010714 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:02:26.335030 1010714 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:02:26.335100 1010714 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:02:26.350303 1010714 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:02:26.351006 1010714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:02:26.351070 1010714 cni.go:84] Creating CNI manager for ""
	I0729 14:02:26.351082 1010714 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 14:02:26.351149 1010714 start.go:340] cluster config:
	{Name:multinode-999945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:02:26.351290 1010714 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:02:26.353214 1010714 out.go:177] * Starting "multinode-999945" primary control-plane node in "multinode-999945" cluster
	I0729 14:02:26.354463 1010714 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:02:26.354509 1010714 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 14:02:26.354520 1010714 cache.go:56] Caching tarball of preloaded images
	I0729 14:02:26.354592 1010714 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:02:26.354602 1010714 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 14:02:26.354734 1010714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/config.json ...
	I0729 14:02:26.354931 1010714 start.go:360] acquireMachinesLock for multinode-999945: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:02:26.354973 1010714 start.go:364] duration metric: took 24.035µs to acquireMachinesLock for "multinode-999945"
	I0729 14:02:26.354987 1010714 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:02:26.355002 1010714 fix.go:54] fixHost starting: 
	I0729 14:02:26.355314 1010714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:02:26.355351 1010714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:02:26.369556 1010714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0729 14:02:26.369960 1010714 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:02:26.370437 1010714 main.go:141] libmachine: Using API Version  1
	I0729 14:02:26.370471 1010714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:02:26.370796 1010714 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:02:26.370988 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:02:26.371131 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetState
	I0729 14:02:26.372636 1010714 fix.go:112] recreateIfNeeded on multinode-999945: state=Running err=<nil>
	W0729 14:02:26.372657 1010714 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:02:26.375136 1010714 out.go:177] * Updating the running kvm2 "multinode-999945" VM ...
	I0729 14:02:26.376479 1010714 machine.go:94] provisionDockerMachine start ...
	I0729 14:02:26.376500 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:02:26.376711 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:26.379242 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.379610 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.379630 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.379794 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:26.379974 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.380099 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.380225 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:26.380394 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:02:26.380684 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:02:26.380700 1010714 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:02:26.498625 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-999945
	
	I0729 14:02:26.498661 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetMachineName
	I0729 14:02:26.498930 1010714 buildroot.go:166] provisioning hostname "multinode-999945"
	I0729 14:02:26.498957 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetMachineName
	I0729 14:02:26.499174 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:26.502085 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.502508 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.502530 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.502670 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:26.502869 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.503019 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.503151 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:26.503294 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:02:26.503478 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:02:26.503490 1010714 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-999945 && echo "multinode-999945" | sudo tee /etc/hostname
	I0729 14:02:26.636620 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-999945
	
	I0729 14:02:26.636659 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:26.639373 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.639721 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.639758 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.639948 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:26.640154 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.640343 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.640486 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:26.640708 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:02:26.640910 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:02:26.640928 1010714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-999945' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-999945/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-999945' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:02:26.753868 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:02:26.753898 1010714 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:02:26.753923 1010714 buildroot.go:174] setting up certificates
	I0729 14:02:26.753937 1010714 provision.go:84] configureAuth start
	I0729 14:02:26.753963 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetMachineName
	I0729 14:02:26.754244 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetIP
	I0729 14:02:26.756944 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.757283 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.757318 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.757504 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:26.759793 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.760174 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.760218 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.760365 1010714 provision.go:143] copyHostCerts
	I0729 14:02:26.760401 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:02:26.760466 1010714 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:02:26.760478 1010714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:02:26.760553 1010714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:02:26.760649 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:02:26.760671 1010714 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:02:26.760684 1010714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:02:26.760713 1010714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:02:26.760756 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:02:26.760771 1010714 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:02:26.760777 1010714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:02:26.760797 1010714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:02:26.760891 1010714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.multinode-999945 san=[127.0.0.1 192.168.39.69 localhost minikube multinode-999945]
	I0729 14:02:27.214007 1010714 provision.go:177] copyRemoteCerts
	I0729 14:02:27.214102 1010714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:02:27.214141 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:27.216742 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:27.217102 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:27.217136 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:27.217313 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:27.217510 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:27.217693 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:27.217806 1010714 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 14:02:27.304837 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 14:02:27.304928 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:02:27.330109 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 14:02:27.330193 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 14:02:27.357332 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 14:02:27.357408 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 14:02:27.381578 1010714 provision.go:87] duration metric: took 627.626897ms to configureAuth
	I0729 14:02:27.381603 1010714 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:02:27.381848 1010714 config.go:182] Loaded profile config "multinode-999945": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:02:27.381938 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:27.384654 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:27.385029 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:27.385056 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:27.385190 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:27.385402 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:27.385603 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:27.385737 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:27.385916 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:02:27.386074 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:02:27.386089 1010714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:03:58.240500 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:03:58.240555 1010714 machine.go:97] duration metric: took 1m31.864058512s to provisionDockerMachine
	I0729 14:03:58.240589 1010714 start.go:293] postStartSetup for "multinode-999945" (driver="kvm2")
	I0729 14:03:58.240608 1010714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:03:58.240645 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.241032 1010714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:03:58.241076 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:03:58.244042 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.244522 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.244547 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.244700 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:03:58.244909 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.245124 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:03:58.245278 1010714 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 14:03:58.332453 1010714 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:03:58.336682 1010714 command_runner.go:130] > NAME=Buildroot
	I0729 14:03:58.336708 1010714 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 14:03:58.336714 1010714 command_runner.go:130] > ID=buildroot
	I0729 14:03:58.336721 1010714 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 14:03:58.336729 1010714 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 14:03:58.336772 1010714 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:03:58.336787 1010714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:03:58.336871 1010714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:03:58.336945 1010714 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:03:58.336954 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 14:03:58.337043 1010714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:03:58.346437 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:03:58.369887 1010714 start.go:296] duration metric: took 129.282129ms for postStartSetup
	I0729 14:03:58.369929 1010714 fix.go:56] duration metric: took 1m32.014932766s for fixHost
	I0729 14:03:58.369964 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:03:58.372701 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.373045 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.373076 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.373219 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:03:58.373426 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.373616 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.373764 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:03:58.373932 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:03:58.374095 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:03:58.374106 1010714 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:03:58.480896 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722261838.452380819
	
	I0729 14:03:58.480930 1010714 fix.go:216] guest clock: 1722261838.452380819
	I0729 14:03:58.480943 1010714 fix.go:229] Guest: 2024-07-29 14:03:58.452380819 +0000 UTC Remote: 2024-07-29 14:03:58.369934649 +0000 UTC m=+92.145841276 (delta=82.44617ms)
	I0729 14:03:58.480974 1010714 fix.go:200] guest clock delta is within tolerance: 82.44617ms
	I0729 14:03:58.480983 1010714 start.go:83] releasing machines lock for "multinode-999945", held for 1m32.126000528s
	I0729 14:03:58.481016 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.481286 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetIP
	I0729 14:03:58.483590 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.483966 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.483998 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.484101 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.484649 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.484820 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.484933 1010714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:03:58.484984 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:03:58.485049 1010714 ssh_runner.go:195] Run: cat /version.json
	I0729 14:03:58.485074 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:03:58.487436 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.487746 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.487773 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.487866 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.487909 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:03:58.488059 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.488212 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:03:58.488386 1010714 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 14:03:58.488395 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.488446 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.488605 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:03:58.488783 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.488941 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:03:58.489106 1010714 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 14:03:58.583653 1010714 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 14:03:58.584180 1010714 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 14:03:58.584336 1010714 ssh_runner.go:195] Run: systemctl --version
	I0729 14:03:58.589970 1010714 command_runner.go:130] > systemd 252 (252)
	I0729 14:03:58.590002 1010714 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 14:03:58.590289 1010714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:03:58.752732 1010714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 14:03:58.758928 1010714 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 14:03:58.759014 1010714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:03:58.759087 1010714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:03:58.768850 1010714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 14:03:58.768878 1010714 start.go:495] detecting cgroup driver to use...
	I0729 14:03:58.768954 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:03:58.785852 1010714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:03:58.799185 1010714 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:03:58.799248 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:03:58.812998 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:03:58.826563 1010714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:03:58.965142 1010714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:03:59.104459 1010714 docker.go:233] disabling docker service ...
	I0729 14:03:59.104540 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:03:59.121095 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:03:59.134818 1010714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:03:59.272654 1010714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:03:59.412442 1010714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:03:59.426844 1010714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:03:59.445910 1010714 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 14:03:59.445970 1010714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:03:59.446025 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.457704 1010714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:03:59.457783 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.469084 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.479843 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.490545 1010714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:03:59.501469 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.512697 1010714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.523067 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
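The sed edits above set the pause image, cgroup manager, conmon cgroup and default sysctls in the CRI-O drop-in. A minimal sketch for verifying those values on the node; the file path and keys are taken from the commands above, while the rest of 02-crio.conf is not shown in this log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# Expected lines, inferred only from the sed commands above:
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",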
	I0729 14:03:59.534154 1010714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:03:59.543908 1010714 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 14:03:59.543998 1010714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:03:59.553776 1010714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:03:59.687977 1010714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:03:59.927093 1010714 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:03:59.927166 1010714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:03:59.932043 1010714 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 14:03:59.932064 1010714 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 14:03:59.932070 1010714 command_runner.go:130] > Device: 0,22	Inode: 1343        Links: 1
	I0729 14:03:59.932077 1010714 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 14:03:59.932082 1010714 command_runner.go:130] > Access: 2024-07-29 14:03:59.796578763 +0000
	I0729 14:03:59.932088 1010714 command_runner.go:130] > Modify: 2024-07-29 14:03:59.796578763 +0000
	I0729 14:03:59.932092 1010714 command_runner.go:130] > Change: 2024-07-29 14:03:59.796578763 +0000
	I0729 14:03:59.932096 1010714 command_runner.go:130] >  Birth: -
	I0729 14:03:59.932216 1010714 start.go:563] Will wait 60s for crictl version
	I0729 14:03:59.932285 1010714 ssh_runner.go:195] Run: which crictl
	I0729 14:03:59.936785 1010714 command_runner.go:130] > /usr/bin/crictl
	I0729 14:03:59.936866 1010714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:03:59.978114 1010714 command_runner.go:130] > Version:  0.1.0
	I0729 14:03:59.978144 1010714 command_runner.go:130] > RuntimeName:  cri-o
	I0729 14:03:59.978150 1010714 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 14:03:59.978158 1010714 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 14:03:59.978239 1010714 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:03:59.978332 1010714 ssh_runner.go:195] Run: crio --version
	I0729 14:04:00.006746 1010714 command_runner.go:130] > crio version 1.29.1
	I0729 14:04:00.006773 1010714 command_runner.go:130] > Version:        1.29.1
	I0729 14:04:00.006780 1010714 command_runner.go:130] > GitCommit:      unknown
	I0729 14:04:00.006785 1010714 command_runner.go:130] > GitCommitDate:  unknown
	I0729 14:04:00.006789 1010714 command_runner.go:130] > GitTreeState:   clean
	I0729 14:04:00.006796 1010714 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 14:04:00.006800 1010714 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 14:04:00.006804 1010714 command_runner.go:130] > Compiler:       gc
	I0729 14:04:00.006808 1010714 command_runner.go:130] > Platform:       linux/amd64
	I0729 14:04:00.006812 1010714 command_runner.go:130] > Linkmode:       dynamic
	I0729 14:04:00.006839 1010714 command_runner.go:130] > BuildTags:      
	I0729 14:04:00.006848 1010714 command_runner.go:130] >   containers_image_ostree_stub
	I0729 14:04:00.006853 1010714 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 14:04:00.006863 1010714 command_runner.go:130] >   btrfs_noversion
	I0729 14:04:00.006867 1010714 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 14:04:00.006877 1010714 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 14:04:00.006883 1010714 command_runner.go:130] >   seccomp
	I0729 14:04:00.006888 1010714 command_runner.go:130] > LDFlags:          unknown
	I0729 14:04:00.006894 1010714 command_runner.go:130] > SeccompEnabled:   true
	I0729 14:04:00.006898 1010714 command_runner.go:130] > AppArmorEnabled:  false
	I0729 14:04:00.006972 1010714 ssh_runner.go:195] Run: crio --version
	I0729 14:04:00.035423 1010714 command_runner.go:130] > crio version 1.29.1
	I0729 14:04:00.035457 1010714 command_runner.go:130] > Version:        1.29.1
	I0729 14:04:00.035463 1010714 command_runner.go:130] > GitCommit:      unknown
	I0729 14:04:00.035467 1010714 command_runner.go:130] > GitCommitDate:  unknown
	I0729 14:04:00.035471 1010714 command_runner.go:130] > GitTreeState:   clean
	I0729 14:04:00.035477 1010714 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 14:04:00.035481 1010714 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 14:04:00.035485 1010714 command_runner.go:130] > Compiler:       gc
	I0729 14:04:00.035490 1010714 command_runner.go:130] > Platform:       linux/amd64
	I0729 14:04:00.035494 1010714 command_runner.go:130] > Linkmode:       dynamic
	I0729 14:04:00.035499 1010714 command_runner.go:130] > BuildTags:      
	I0729 14:04:00.035506 1010714 command_runner.go:130] >   containers_image_ostree_stub
	I0729 14:04:00.035510 1010714 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 14:04:00.035514 1010714 command_runner.go:130] >   btrfs_noversion
	I0729 14:04:00.035520 1010714 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 14:04:00.035524 1010714 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 14:04:00.035530 1010714 command_runner.go:130] >   seccomp
	I0729 14:04:00.035534 1010714 command_runner.go:130] > LDFlags:          unknown
	I0729 14:04:00.035538 1010714 command_runner.go:130] > SeccompEnabled:   true
	I0729 14:04:00.035541 1010714 command_runner.go:130] > AppArmorEnabled:  false
	I0729 14:04:00.038142 1010714 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:04:00.039362 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetIP
	I0729 14:04:00.041797 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:04:00.042132 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:04:00.042153 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:04:00.042364 1010714 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:04:00.046537 1010714 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 14:04:00.046651 1010714 kubeadm.go:883] updating cluster {Name:multinode-999945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:04:00.046806 1010714 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:04:00.046857 1010714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:04:00.098123 1010714 command_runner.go:130] > {
	I0729 14:04:00.098155 1010714 command_runner.go:130] >   "images": [
	I0729 14:04:00.098161 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098172 1010714 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 14:04:00.098178 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098186 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 14:04:00.098191 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098196 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098208 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 14:04:00.098222 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 14:04:00.098228 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098236 1010714 command_runner.go:130] >       "size": "87165492",
	I0729 14:04:00.098254 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098263 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.098273 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098282 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098288 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098296 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098306 1010714 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 14:04:00.098315 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098330 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 14:04:00.098339 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098346 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098359 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 14:04:00.098374 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 14:04:00.098383 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098392 1010714 command_runner.go:130] >       "size": "87174707",
	I0729 14:04:00.098402 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098416 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.098425 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098433 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098441 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098446 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098460 1010714 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 14:04:00.098469 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098480 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 14:04:00.098488 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098496 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098510 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 14:04:00.098521 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 14:04:00.098526 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098537 1010714 command_runner.go:130] >       "size": "1363676",
	I0729 14:04:00.098546 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098553 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.098561 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098569 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098577 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098584 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098605 1010714 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 14:04:00.098615 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098623 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 14:04:00.098628 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098634 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098648 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 14:04:00.098672 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 14:04:00.098680 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098688 1010714 command_runner.go:130] >       "size": "31470524",
	I0729 14:04:00.098697 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098704 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.098713 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098720 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098727 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098734 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098747 1010714 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 14:04:00.098767 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098781 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 14:04:00.098789 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098797 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098811 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 14:04:00.098826 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 14:04:00.098835 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098846 1010714 command_runner.go:130] >       "size": "61245718",
	I0729 14:04:00.098854 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098862 1010714 command_runner.go:130] >       "username": "nonroot",
	I0729 14:04:00.098872 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098880 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098886 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098894 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098905 1010714 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 14:04:00.098914 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098924 1010714 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 14:04:00.098932 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098939 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098953 1010714 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 14:04:00.098975 1010714 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 14:04:00.098983 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098991 1010714 command_runner.go:130] >       "size": "150779692",
	I0729 14:04:00.099000 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099007 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.099016 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099024 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099033 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099041 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099050 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099056 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099069 1010714 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 14:04:00.099079 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099089 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 14:04:00.099097 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099104 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099119 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 14:04:00.099134 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 14:04:00.099142 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099150 1010714 command_runner.go:130] >       "size": "117609954",
	I0729 14:04:00.099158 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099166 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.099174 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099180 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099187 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099197 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099203 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099212 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099223 1010714 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 14:04:00.099232 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099241 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 14:04:00.099249 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099257 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099288 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 14:04:00.099304 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 14:04:00.099313 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099328 1010714 command_runner.go:130] >       "size": "112198984",
	I0729 14:04:00.099339 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099345 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.099350 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099356 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099362 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099368 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099373 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099377 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099384 1010714 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 14:04:00.099388 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099392 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 14:04:00.099395 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099399 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099406 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 14:04:00.099412 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 14:04:00.099416 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099419 1010714 command_runner.go:130] >       "size": "85953945",
	I0729 14:04:00.099423 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.099426 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099430 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099433 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099436 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099439 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099445 1010714 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 14:04:00.099449 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099453 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 14:04:00.099456 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099460 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099467 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 14:04:00.099476 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 14:04:00.099479 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099483 1010714 command_runner.go:130] >       "size": "63051080",
	I0729 14:04:00.099487 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099491 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.099494 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099501 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099507 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099511 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099514 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099518 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099526 1010714 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 14:04:00.099533 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099537 1010714 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 14:04:00.099543 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099548 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099554 1010714 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 14:04:00.099562 1010714 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 14:04:00.099566 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099572 1010714 command_runner.go:130] >       "size": "750414",
	I0729 14:04:00.099576 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099581 1010714 command_runner.go:130] >         "value": "65535"
	I0729 14:04:00.099584 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099590 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099594 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099597 1010714 command_runner.go:130] >       "pinned": true
	I0729 14:04:00.099601 1010714 command_runner.go:130] >     }
	I0729 14:04:00.099604 1010714 command_runner.go:130] >   ]
	I0729 14:04:00.099607 1010714 command_runner.go:130] > }
	I0729 14:04:00.099810 1010714 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:04:00.099822 1010714 crio.go:433] Images already preloaded, skipping extraction
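The preload check above parses the JSON image list returned by crictl. A minimal sketch for reproducing that listing by hand on the node, assuming jq is installed there (jq is not part of this log):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# Prints one tag per line, e.g. registry.k8s.io/kube-apiserver:v1.30.3, matching the JSON above.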
	I0729 14:04:00.099885 1010714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:04:00.137036 1010714 command_runner.go:130] > {
	I0729 14:04:00.137061 1010714 command_runner.go:130] >   "images": [
	I0729 14:04:00.137066 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137074 1010714 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 14:04:00.137079 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137085 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 14:04:00.137088 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137093 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137105 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 14:04:00.137129 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 14:04:00.137144 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137154 1010714 command_runner.go:130] >       "size": "87165492",
	I0729 14:04:00.137167 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137175 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137183 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137189 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137193 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137199 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137208 1010714 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 14:04:00.137218 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137230 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 14:04:00.137236 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137243 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137258 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 14:04:00.137270 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 14:04:00.137276 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137280 1010714 command_runner.go:130] >       "size": "87174707",
	I0729 14:04:00.137284 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137295 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137304 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137313 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137321 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137327 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137339 1010714 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 14:04:00.137348 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137356 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 14:04:00.137376 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137386 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137401 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 14:04:00.137416 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 14:04:00.137425 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137432 1010714 command_runner.go:130] >       "size": "1363676",
	I0729 14:04:00.137441 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137448 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137461 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137473 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137481 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137489 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137502 1010714 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 14:04:00.137508 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137520 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 14:04:00.137528 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137535 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137550 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 14:04:00.137576 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 14:04:00.137584 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137590 1010714 command_runner.go:130] >       "size": "31470524",
	I0729 14:04:00.137599 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137606 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137616 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137622 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137630 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137636 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137649 1010714 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 14:04:00.137657 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137665 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 14:04:00.137671 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137675 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137687 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 14:04:00.137701 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 14:04:00.137712 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137722 1010714 command_runner.go:130] >       "size": "61245718",
	I0729 14:04:00.137728 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137738 1010714 command_runner.go:130] >       "username": "nonroot",
	I0729 14:04:00.137745 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137754 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137763 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137770 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137776 1010714 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 14:04:00.137784 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137792 1010714 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 14:04:00.137812 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137821 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137835 1010714 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 14:04:00.137849 1010714 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 14:04:00.137857 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137864 1010714 command_runner.go:130] >       "size": "150779692",
	I0729 14:04:00.137872 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.137878 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.137886 1010714 command_runner.go:130] >       },
	I0729 14:04:00.137896 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137905 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137915 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137923 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137931 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137941 1010714 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 14:04:00.137982 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138006 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 14:04:00.138012 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138022 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138036 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 14:04:00.138050 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 14:04:00.138059 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138066 1010714 command_runner.go:130] >       "size": "117609954",
	I0729 14:04:00.138075 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.138082 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.138086 1010714 command_runner.go:130] >       },
	I0729 14:04:00.138095 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138105 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138113 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.138121 1010714 command_runner.go:130] >     },
	I0729 14:04:00.138127 1010714 command_runner.go:130] >     {
	I0729 14:04:00.138141 1010714 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 14:04:00.138151 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138162 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 14:04:00.138170 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138178 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138214 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 14:04:00.138230 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 14:04:00.138239 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138246 1010714 command_runner.go:130] >       "size": "112198984",
	I0729 14:04:00.138254 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.138261 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.138268 1010714 command_runner.go:130] >       },
	I0729 14:04:00.138275 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138284 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138290 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.138295 1010714 command_runner.go:130] >     },
	I0729 14:04:00.138298 1010714 command_runner.go:130] >     {
	I0729 14:04:00.138309 1010714 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 14:04:00.138318 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138326 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 14:04:00.138335 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138342 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138355 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 14:04:00.138372 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 14:04:00.138380 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138387 1010714 command_runner.go:130] >       "size": "85953945",
	I0729 14:04:00.138392 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.138401 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138411 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138417 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.138426 1010714 command_runner.go:130] >     },
	I0729 14:04:00.138434 1010714 command_runner.go:130] >     {
	I0729 14:04:00.138444 1010714 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 14:04:00.138453 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138465 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 14:04:00.138473 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138481 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138489 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 14:04:00.138503 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 14:04:00.138512 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138518 1010714 command_runner.go:130] >       "size": "63051080",
	I0729 14:04:00.138529 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.138537 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.138546 1010714 command_runner.go:130] >       },
	I0729 14:04:00.138555 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138564 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138572 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.138579 1010714 command_runner.go:130] >     },
	I0729 14:04:00.138583 1010714 command_runner.go:130] >     {
	I0729 14:04:00.138589 1010714 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 14:04:00.138599 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138610 1010714 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 14:04:00.138615 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138625 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138639 1010714 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 14:04:00.138653 1010714 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 14:04:00.138661 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138671 1010714 command_runner.go:130] >       "size": "750414",
	I0729 14:04:00.138678 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.138682 1010714 command_runner.go:130] >         "value": "65535"
	I0729 14:04:00.138689 1010714 command_runner.go:130] >       },
	I0729 14:04:00.138696 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138705 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138711 1010714 command_runner.go:130] >       "pinned": true
	I0729 14:04:00.138720 1010714 command_runner.go:130] >     }
	I0729 14:04:00.138726 1010714 command_runner.go:130] >   ]
	I0729 14:04:00.138734 1010714 command_runner.go:130] > }
	I0729 14:04:00.138897 1010714 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:04:00.138913 1010714 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:04:00.138921 1010714 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.30.3 crio true true} ...
	I0729 14:04:00.139077 1010714 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-999945 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
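The kubelet unit fragment above is written as a systemd drop-in on the node. A minimal sketch for confirming it took effect, using only a standard systemd command (not part of this log):

	sudo systemctl cat kubelet.service
	# Should show the ExecStart line above, including --hostname-override=multinode-999945 and --node-ip=192.168.39.69.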
	I0729 14:04:00.139166 1010714 ssh_runner.go:195] Run: crio config
	I0729 14:04:00.180804 1010714 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 14:04:00.180836 1010714 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 14:04:00.180843 1010714 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 14:04:00.180846 1010714 command_runner.go:130] > #
	I0729 14:04:00.180864 1010714 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 14:04:00.180870 1010714 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 14:04:00.180876 1010714 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 14:04:00.180892 1010714 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 14:04:00.180899 1010714 command_runner.go:130] > # reload'.
	I0729 14:04:00.180907 1010714 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 14:04:00.180926 1010714 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 14:04:00.180942 1010714 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 14:04:00.180950 1010714 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 14:04:00.180955 1010714 command_runner.go:130] > [crio]
	I0729 14:04:00.180964 1010714 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 14:04:00.180969 1010714 command_runner.go:130] > # containers images, in this directory.
	I0729 14:04:00.180975 1010714 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 14:04:00.180984 1010714 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 14:04:00.181109 1010714 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 14:04:00.181139 1010714 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 14:04:00.181499 1010714 command_runner.go:130] > # imagestore = ""
	I0729 14:04:00.181521 1010714 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 14:04:00.181532 1010714 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 14:04:00.181643 1010714 command_runner.go:130] > storage_driver = "overlay"
	I0729 14:04:00.181662 1010714 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 14:04:00.181673 1010714 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 14:04:00.181680 1010714 command_runner.go:130] > storage_option = [
	I0729 14:04:00.181776 1010714 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 14:04:00.181873 1010714 command_runner.go:130] > ]
	I0729 14:04:00.181894 1010714 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 14:04:00.181916 1010714 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 14:04:00.182117 1010714 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 14:04:00.182131 1010714 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 14:04:00.182141 1010714 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 14:04:00.182148 1010714 command_runner.go:130] > # always happen on a node reboot
	I0729 14:04:00.182462 1010714 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 14:04:00.182485 1010714 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 14:04:00.182495 1010714 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 14:04:00.182506 1010714 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 14:04:00.182624 1010714 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 14:04:00.182640 1010714 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 14:04:00.182652 1010714 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 14:04:00.182907 1010714 command_runner.go:130] > # internal_wipe = true
	I0729 14:04:00.182922 1010714 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 14:04:00.182929 1010714 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 14:04:00.183180 1010714 command_runner.go:130] > # internal_repair = false
	I0729 14:04:00.183190 1010714 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 14:04:00.183196 1010714 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 14:04:00.183202 1010714 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 14:04:00.183535 1010714 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 14:04:00.183547 1010714 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 14:04:00.183550 1010714 command_runner.go:130] > [crio.api]
	I0729 14:04:00.183556 1010714 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 14:04:00.184018 1010714 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 14:04:00.184028 1010714 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 14:04:00.184464 1010714 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 14:04:00.184484 1010714 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 14:04:00.184493 1010714 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 14:04:00.184790 1010714 command_runner.go:130] > # stream_port = "0"
	I0729 14:04:00.184800 1010714 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 14:04:00.185093 1010714 command_runner.go:130] > # stream_enable_tls = false
	I0729 14:04:00.185103 1010714 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 14:04:00.185422 1010714 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 14:04:00.185442 1010714 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 14:04:00.185452 1010714 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 14:04:00.185461 1010714 command_runner.go:130] > # minutes.
	I0729 14:04:00.185669 1010714 command_runner.go:130] > # stream_tls_cert = ""
	I0729 14:04:00.185695 1010714 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 14:04:00.185705 1010714 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 14:04:00.185894 1010714 command_runner.go:130] > # stream_tls_key = ""
	I0729 14:04:00.185906 1010714 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 14:04:00.185915 1010714 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 14:04:00.185937 1010714 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 14:04:00.186099 1010714 command_runner.go:130] > # stream_tls_ca = ""
	I0729 14:04:00.186111 1010714 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 14:04:00.186266 1010714 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 14:04:00.186283 1010714 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 14:04:00.186485 1010714 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 14:04:00.186502 1010714 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 14:04:00.186514 1010714 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 14:04:00.186524 1010714 command_runner.go:130] > [crio.runtime]
	I0729 14:04:00.186536 1010714 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 14:04:00.186547 1010714 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 14:04:00.186556 1010714 command_runner.go:130] > # "nofile=1024:2048"
	I0729 14:04:00.186567 1010714 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 14:04:00.186681 1010714 command_runner.go:130] > # default_ulimits = [
	I0729 14:04:00.186811 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.186826 1010714 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 14:04:00.187097 1010714 command_runner.go:130] > # no_pivot = false
	I0729 14:04:00.187112 1010714 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 14:04:00.187125 1010714 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 14:04:00.187450 1010714 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 14:04:00.187465 1010714 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 14:04:00.187473 1010714 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 14:04:00.187483 1010714 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 14:04:00.187605 1010714 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 14:04:00.187617 1010714 command_runner.go:130] > # Cgroup setting for conmon
	I0729 14:04:00.187628 1010714 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 14:04:00.187744 1010714 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 14:04:00.187760 1010714 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 14:04:00.187767 1010714 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 14:04:00.187778 1010714 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 14:04:00.187787 1010714 command_runner.go:130] > conmon_env = [
	I0729 14:04:00.187844 1010714 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 14:04:00.187911 1010714 command_runner.go:130] > ]
	I0729 14:04:00.187924 1010714 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 14:04:00.187932 1010714 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 14:04:00.187943 1010714 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 14:04:00.188047 1010714 command_runner.go:130] > # default_env = [
	I0729 14:04:00.188170 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.188185 1010714 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 14:04:00.188197 1010714 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 14:04:00.189805 1010714 command_runner.go:130] > # selinux = false
	I0729 14:04:00.189828 1010714 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 14:04:00.189836 1010714 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 14:04:00.189844 1010714 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 14:04:00.189849 1010714 command_runner.go:130] > # seccomp_profile = ""
	I0729 14:04:00.189854 1010714 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 14:04:00.189859 1010714 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 14:04:00.189865 1010714 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 14:04:00.189870 1010714 command_runner.go:130] > # which might increase security.
	I0729 14:04:00.189875 1010714 command_runner.go:130] > # This option is currently deprecated,
	I0729 14:04:00.189884 1010714 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 14:04:00.189892 1010714 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 14:04:00.189899 1010714 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 14:04:00.189907 1010714 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 14:04:00.189915 1010714 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 14:04:00.189922 1010714 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 14:04:00.189929 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.189934 1010714 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 14:04:00.189941 1010714 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 14:04:00.189948 1010714 command_runner.go:130] > # the cgroup blockio controller.
	I0729 14:04:00.189952 1010714 command_runner.go:130] > # blockio_config_file = ""
	I0729 14:04:00.189960 1010714 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 14:04:00.189966 1010714 command_runner.go:130] > # blockio parameters.
	I0729 14:04:00.189971 1010714 command_runner.go:130] > # blockio_reload = false
	I0729 14:04:00.189979 1010714 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 14:04:00.189983 1010714 command_runner.go:130] > # irqbalance daemon.
	I0729 14:04:00.189991 1010714 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 14:04:00.189999 1010714 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 14:04:00.190006 1010714 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 14:04:00.190014 1010714 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 14:04:00.190029 1010714 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 14:04:00.190037 1010714 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 14:04:00.190044 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.190048 1010714 command_runner.go:130] > # rdt_config_file = ""
	I0729 14:04:00.190056 1010714 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 14:04:00.190060 1010714 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 14:04:00.190077 1010714 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 14:04:00.190084 1010714 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 14:04:00.190090 1010714 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 14:04:00.190099 1010714 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 14:04:00.190105 1010714 command_runner.go:130] > # will be added.
	I0729 14:04:00.190111 1010714 command_runner.go:130] > # default_capabilities = [
	I0729 14:04:00.190116 1010714 command_runner.go:130] > # 	"CHOWN",
	I0729 14:04:00.190120 1010714 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 14:04:00.190126 1010714 command_runner.go:130] > # 	"FSETID",
	I0729 14:04:00.190130 1010714 command_runner.go:130] > # 	"FOWNER",
	I0729 14:04:00.190136 1010714 command_runner.go:130] > # 	"SETGID",
	I0729 14:04:00.190140 1010714 command_runner.go:130] > # 	"SETUID",
	I0729 14:04:00.190145 1010714 command_runner.go:130] > # 	"SETPCAP",
	I0729 14:04:00.190149 1010714 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 14:04:00.190152 1010714 command_runner.go:130] > # 	"KILL",
	I0729 14:04:00.190156 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190164 1010714 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 14:04:00.190172 1010714 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 14:04:00.190177 1010714 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 14:04:00.190184 1010714 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 14:04:00.190192 1010714 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 14:04:00.190197 1010714 command_runner.go:130] > default_sysctls = [
	I0729 14:04:00.190201 1010714 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 14:04:00.190207 1010714 command_runner.go:130] > ]
	I0729 14:04:00.190212 1010714 command_runner.go:130] > # List of devices on the host that a
	I0729 14:04:00.190221 1010714 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 14:04:00.190227 1010714 command_runner.go:130] > # allowed_devices = [
	I0729 14:04:00.190231 1010714 command_runner.go:130] > # 	"/dev/fuse",
	I0729 14:04:00.190234 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190239 1010714 command_runner.go:130] > # List of additional devices. specified as
	I0729 14:04:00.190248 1010714 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 14:04:00.190253 1010714 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 14:04:00.190259 1010714 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 14:04:00.190264 1010714 command_runner.go:130] > # additional_devices = [
	I0729 14:04:00.190267 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190274 1010714 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 14:04:00.190281 1010714 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 14:04:00.190286 1010714 command_runner.go:130] > # 	"/etc/cdi",
	I0729 14:04:00.190290 1010714 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 14:04:00.190295 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190302 1010714 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 14:04:00.190309 1010714 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 14:04:00.190314 1010714 command_runner.go:130] > # Defaults to false.
	I0729 14:04:00.190319 1010714 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 14:04:00.190328 1010714 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 14:04:00.190334 1010714 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 14:04:00.190340 1010714 command_runner.go:130] > # hooks_dir = [
	I0729 14:04:00.190344 1010714 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 14:04:00.190348 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190356 1010714 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 14:04:00.190362 1010714 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 14:04:00.190369 1010714 command_runner.go:130] > # its default mounts from the following two files:
	I0729 14:04:00.190373 1010714 command_runner.go:130] > #
	I0729 14:04:00.190379 1010714 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 14:04:00.190387 1010714 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 14:04:00.190394 1010714 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 14:04:00.190397 1010714 command_runner.go:130] > #
	I0729 14:04:00.190403 1010714 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 14:04:00.190411 1010714 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 14:04:00.190419 1010714 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 14:04:00.190426 1010714 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 14:04:00.190429 1010714 command_runner.go:130] > #
	I0729 14:04:00.190433 1010714 command_runner.go:130] > # default_mounts_file = ""
	I0729 14:04:00.190440 1010714 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 14:04:00.190446 1010714 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 14:04:00.190453 1010714 command_runner.go:130] > pids_limit = 1024
	I0729 14:04:00.190459 1010714 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 14:04:00.190467 1010714 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 14:04:00.190475 1010714 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 14:04:00.190484 1010714 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 14:04:00.190490 1010714 command_runner.go:130] > # log_size_max = -1
	I0729 14:04:00.190497 1010714 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 14:04:00.190503 1010714 command_runner.go:130] > # log_to_journald = false
	I0729 14:04:00.190512 1010714 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 14:04:00.190521 1010714 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 14:04:00.190533 1010714 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 14:04:00.190540 1010714 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 14:04:00.190545 1010714 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 14:04:00.190551 1010714 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 14:04:00.190557 1010714 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 14:04:00.190563 1010714 command_runner.go:130] > # read_only = false
	I0729 14:04:00.190569 1010714 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 14:04:00.190577 1010714 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 14:04:00.190584 1010714 command_runner.go:130] > # live configuration reload.
	I0729 14:04:00.190588 1010714 command_runner.go:130] > # log_level = "info"
	I0729 14:04:00.190595 1010714 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 14:04:00.190600 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.190606 1010714 command_runner.go:130] > # log_filter = ""
	I0729 14:04:00.190611 1010714 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 14:04:00.190621 1010714 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 14:04:00.190625 1010714 command_runner.go:130] > # separated by comma.
	I0729 14:04:00.190633 1010714 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 14:04:00.190643 1010714 command_runner.go:130] > # uid_mappings = ""
	I0729 14:04:00.190651 1010714 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 14:04:00.190658 1010714 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 14:04:00.190664 1010714 command_runner.go:130] > # separated by comma.
	I0729 14:04:00.190672 1010714 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 14:04:00.190679 1010714 command_runner.go:130] > # gid_mappings = ""
	I0729 14:04:00.190684 1010714 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 14:04:00.190692 1010714 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 14:04:00.190700 1010714 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 14:04:00.190708 1010714 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 14:04:00.190715 1010714 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 14:04:00.190721 1010714 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 14:04:00.190729 1010714 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 14:04:00.190735 1010714 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 14:04:00.190744 1010714 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 14:04:00.190750 1010714 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 14:04:00.190760 1010714 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 14:04:00.190768 1010714 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 14:04:00.190773 1010714 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 14:04:00.190784 1010714 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 14:04:00.190792 1010714 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 14:04:00.190799 1010714 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 14:04:00.190807 1010714 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 14:04:00.190814 1010714 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 14:04:00.190818 1010714 command_runner.go:130] > drop_infra_ctr = false
	I0729 14:04:00.190827 1010714 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 14:04:00.190832 1010714 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 14:04:00.190841 1010714 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 14:04:00.190845 1010714 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 14:04:00.190851 1010714 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 14:04:00.190859 1010714 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 14:04:00.190864 1010714 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 14:04:00.190870 1010714 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 14:04:00.190876 1010714 command_runner.go:130] > # shared_cpuset = ""
	I0729 14:04:00.190881 1010714 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 14:04:00.190886 1010714 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 14:04:00.190891 1010714 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 14:04:00.190898 1010714 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 14:04:00.190902 1010714 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 14:04:00.190907 1010714 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 14:04:00.190914 1010714 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 14:04:00.190918 1010714 command_runner.go:130] > # enable_criu_support = false
	I0729 14:04:00.190925 1010714 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 14:04:00.190931 1010714 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 14:04:00.190937 1010714 command_runner.go:130] > # enable_pod_events = false
	I0729 14:04:00.190942 1010714 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 14:04:00.190954 1010714 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 14:04:00.190960 1010714 command_runner.go:130] > # default_runtime = "runc"
	I0729 14:04:00.190965 1010714 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 14:04:00.190971 1010714 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 14:04:00.190987 1010714 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 14:04:00.190996 1010714 command_runner.go:130] > # creation as a file is not desired either.
	I0729 14:04:00.191008 1010714 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 14:04:00.191022 1010714 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 14:04:00.191033 1010714 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 14:04:00.191037 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.191049 1010714 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 14:04:00.191060 1010714 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 14:04:00.191071 1010714 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 14:04:00.191081 1010714 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 14:04:00.191089 1010714 command_runner.go:130] > #
	I0729 14:04:00.191096 1010714 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 14:04:00.191106 1010714 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 14:04:00.191137 1010714 command_runner.go:130] > # runtime_type = "oci"
	I0729 14:04:00.191144 1010714 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 14:04:00.191149 1010714 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 14:04:00.191154 1010714 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 14:04:00.191158 1010714 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 14:04:00.191167 1010714 command_runner.go:130] > # monitor_env = []
	I0729 14:04:00.191171 1010714 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 14:04:00.191178 1010714 command_runner.go:130] > # allowed_annotations = []
	I0729 14:04:00.191184 1010714 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 14:04:00.191190 1010714 command_runner.go:130] > # Where:
	I0729 14:04:00.191194 1010714 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 14:04:00.191200 1010714 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 14:04:00.191209 1010714 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 14:04:00.191215 1010714 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 14:04:00.191220 1010714 command_runner.go:130] > #   in $PATH.
	I0729 14:04:00.191226 1010714 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 14:04:00.191233 1010714 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 14:04:00.191239 1010714 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 14:04:00.191244 1010714 command_runner.go:130] > #   state.
	I0729 14:04:00.191250 1010714 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 14:04:00.191258 1010714 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 14:04:00.191266 1010714 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 14:04:00.191271 1010714 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 14:04:00.191279 1010714 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 14:04:00.191286 1010714 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 14:04:00.191292 1010714 command_runner.go:130] > #   The currently recognized values are:
	I0729 14:04:00.191298 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 14:04:00.191306 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 14:04:00.191317 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 14:04:00.191325 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 14:04:00.191334 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 14:04:00.191343 1010714 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 14:04:00.191351 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 14:04:00.191360 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 14:04:00.191368 1010714 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 14:04:00.191374 1010714 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 14:04:00.191381 1010714 command_runner.go:130] > #   deprecated option "conmon".
	I0729 14:04:00.191387 1010714 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 14:04:00.191394 1010714 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 14:04:00.191402 1010714 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 14:04:00.191410 1010714 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 14:04:00.191416 1010714 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 14:04:00.191423 1010714 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 14:04:00.191429 1010714 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 14:04:00.191436 1010714 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 14:04:00.191439 1010714 command_runner.go:130] > #
	I0729 14:04:00.191443 1010714 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 14:04:00.191448 1010714 command_runner.go:130] > #
	I0729 14:04:00.191454 1010714 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 14:04:00.191462 1010714 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 14:04:00.191464 1010714 command_runner.go:130] > #
	I0729 14:04:00.191471 1010714 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 14:04:00.191478 1010714 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 14:04:00.191483 1010714 command_runner.go:130] > #
	I0729 14:04:00.191489 1010714 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 14:04:00.191494 1010714 command_runner.go:130] > # feature.
	I0729 14:04:00.191497 1010714 command_runner.go:130] > #
	I0729 14:04:00.191506 1010714 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 14:04:00.191514 1010714 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 14:04:00.191522 1010714 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 14:04:00.191530 1010714 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 14:04:00.191536 1010714 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 14:04:00.191541 1010714 command_runner.go:130] > #
	I0729 14:04:00.191551 1010714 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 14:04:00.191564 1010714 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 14:04:00.191569 1010714 command_runner.go:130] > #
	I0729 14:04:00.191575 1010714 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 14:04:00.191583 1010714 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 14:04:00.191588 1010714 command_runner.go:130] > #
	I0729 14:04:00.191596 1010714 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 14:04:00.191604 1010714 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 14:04:00.191608 1010714 command_runner.go:130] > # limitation.
	I0729 14:04:00.191612 1010714 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 14:04:00.191619 1010714 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 14:04:00.191623 1010714 command_runner.go:130] > runtime_type = "oci"
	I0729 14:04:00.191629 1010714 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 14:04:00.191632 1010714 command_runner.go:130] > runtime_config_path = ""
	I0729 14:04:00.191637 1010714 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 14:04:00.191643 1010714 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 14:04:00.191647 1010714 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 14:04:00.191653 1010714 command_runner.go:130] > monitor_env = [
	I0729 14:04:00.191658 1010714 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 14:04:00.191664 1010714 command_runner.go:130] > ]
	I0729 14:04:00.191670 1010714 command_runner.go:130] > privileged_without_host_devices = false
	I0729 14:04:00.191678 1010714 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 14:04:00.191685 1010714 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 14:04:00.191691 1010714 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 14:04:00.191700 1010714 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 14:04:00.191710 1010714 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 14:04:00.191718 1010714 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 14:04:00.191726 1010714 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 14:04:00.191735 1010714 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 14:04:00.191742 1010714 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 14:04:00.191749 1010714 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 14:04:00.191759 1010714 command_runner.go:130] > # Example:
	I0729 14:04:00.191763 1010714 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 14:04:00.191767 1010714 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 14:04:00.191772 1010714 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 14:04:00.191776 1010714 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 14:04:00.191780 1010714 command_runner.go:130] > # cpuset = 0
	I0729 14:04:00.191784 1010714 command_runner.go:130] > # cpushares = "0-1"
	I0729 14:04:00.191788 1010714 command_runner.go:130] > # Where:
	I0729 14:04:00.191794 1010714 command_runner.go:130] > # The workload name is workload-type.
	I0729 14:04:00.191801 1010714 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 14:04:00.191805 1010714 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 14:04:00.191810 1010714 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 14:04:00.191817 1010714 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 14:04:00.191823 1010714 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 14:04:00.191827 1010714 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 14:04:00.191833 1010714 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 14:04:00.191837 1010714 command_runner.go:130] > # Default value is set to true
	I0729 14:04:00.191841 1010714 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 14:04:00.191845 1010714 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 14:04:00.191850 1010714 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 14:04:00.191853 1010714 command_runner.go:130] > # Default value is set to 'false'
	I0729 14:04:00.191857 1010714 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 14:04:00.191863 1010714 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 14:04:00.191866 1010714 command_runner.go:130] > #
	I0729 14:04:00.191871 1010714 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 14:04:00.191877 1010714 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 14:04:00.191882 1010714 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 14:04:00.191888 1010714 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 14:04:00.191892 1010714 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 14:04:00.191895 1010714 command_runner.go:130] > [crio.image]
	I0729 14:04:00.191901 1010714 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 14:04:00.191904 1010714 command_runner.go:130] > # default_transport = "docker://"
	I0729 14:04:00.191910 1010714 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 14:04:00.191915 1010714 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 14:04:00.191919 1010714 command_runner.go:130] > # global_auth_file = ""
	I0729 14:04:00.191923 1010714 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 14:04:00.191928 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.191932 1010714 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 14:04:00.191938 1010714 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 14:04:00.191944 1010714 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 14:04:00.191949 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.191955 1010714 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 14:04:00.191961 1010714 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 14:04:00.191969 1010714 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 14:04:00.191979 1010714 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 14:04:00.191986 1010714 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 14:04:00.191992 1010714 command_runner.go:130] > # pause_command = "/pause"
	I0729 14:04:00.191998 1010714 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 14:04:00.192005 1010714 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 14:04:00.192011 1010714 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 14:04:00.192019 1010714 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 14:04:00.192025 1010714 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 14:04:00.192032 1010714 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 14:04:00.192038 1010714 command_runner.go:130] > # pinned_images = [
	I0729 14:04:00.192041 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.192048 1010714 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 14:04:00.192056 1010714 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 14:04:00.192064 1010714 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 14:04:00.192070 1010714 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 14:04:00.192077 1010714 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 14:04:00.192084 1010714 command_runner.go:130] > # signature_policy = ""
	I0729 14:04:00.192089 1010714 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 14:04:00.192097 1010714 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 14:04:00.192104 1010714 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 14:04:00.192111 1010714 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 14:04:00.192119 1010714 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 14:04:00.192124 1010714 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 14:04:00.192131 1010714 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 14:04:00.192137 1010714 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 14:04:00.192143 1010714 command_runner.go:130] > # changing them here.
	I0729 14:04:00.192147 1010714 command_runner.go:130] > # insecure_registries = [
	I0729 14:04:00.192152 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.192158 1010714 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 14:04:00.192165 1010714 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 14:04:00.192169 1010714 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 14:04:00.192176 1010714 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 14:04:00.192180 1010714 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 14:04:00.192191 1010714 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 14:04:00.192197 1010714 command_runner.go:130] > # CNI plugins.
	I0729 14:04:00.192200 1010714 command_runner.go:130] > [crio.network]
	I0729 14:04:00.192205 1010714 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 14:04:00.192215 1010714 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 14:04:00.192221 1010714 command_runner.go:130] > # cni_default_network = ""
	I0729 14:04:00.192226 1010714 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 14:04:00.192233 1010714 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 14:04:00.192237 1010714 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 14:04:00.192244 1010714 command_runner.go:130] > # plugin_dirs = [
	I0729 14:04:00.192247 1010714 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 14:04:00.192250 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.192256 1010714 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 14:04:00.192260 1010714 command_runner.go:130] > [crio.metrics]
	I0729 14:04:00.192265 1010714 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 14:04:00.192271 1010714 command_runner.go:130] > enable_metrics = true
	I0729 14:04:00.192275 1010714 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 14:04:00.192285 1010714 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 14:04:00.192292 1010714 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 14:04:00.192298 1010714 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 14:04:00.192304 1010714 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 14:04:00.192310 1010714 command_runner.go:130] > # metrics_collectors = [
	I0729 14:04:00.192313 1010714 command_runner.go:130] > # 	"operations",
	I0729 14:04:00.192318 1010714 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 14:04:00.192324 1010714 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 14:04:00.192328 1010714 command_runner.go:130] > # 	"operations_errors",
	I0729 14:04:00.192333 1010714 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 14:04:00.192337 1010714 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 14:04:00.192344 1010714 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 14:04:00.192348 1010714 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 14:04:00.192354 1010714 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 14:04:00.192358 1010714 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 14:04:00.192364 1010714 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 14:04:00.192368 1010714 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 14:04:00.192374 1010714 command_runner.go:130] > # 	"containers_oom_total",
	I0729 14:04:00.192378 1010714 command_runner.go:130] > # 	"containers_oom",
	I0729 14:04:00.192385 1010714 command_runner.go:130] > # 	"processes_defunct",
	I0729 14:04:00.192389 1010714 command_runner.go:130] > # 	"operations_total",
	I0729 14:04:00.192395 1010714 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 14:04:00.192401 1010714 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 14:04:00.192424 1010714 command_runner.go:130] > # 	"operations_errors_total",
	I0729 14:04:00.192435 1010714 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 14:04:00.192442 1010714 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 14:04:00.192451 1010714 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 14:04:00.192456 1010714 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 14:04:00.192462 1010714 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 14:04:00.192466 1010714 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 14:04:00.192473 1010714 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 14:04:00.192477 1010714 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 14:04:00.192486 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.192493 1010714 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 14:04:00.192497 1010714 command_runner.go:130] > # metrics_port = 9090
	I0729 14:04:00.192505 1010714 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 14:04:00.192511 1010714 command_runner.go:130] > # metrics_socket = ""
	I0729 14:04:00.192516 1010714 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 14:04:00.192524 1010714 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 14:04:00.192530 1010714 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 14:04:00.192537 1010714 command_runner.go:130] > # certificate on any modification event.
	I0729 14:04:00.192541 1010714 command_runner.go:130] > # metrics_cert = ""
	I0729 14:04:00.192548 1010714 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 14:04:00.192553 1010714 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 14:04:00.192559 1010714 command_runner.go:130] > # metrics_key = ""
	I0729 14:04:00.192565 1010714 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 14:04:00.192571 1010714 command_runner.go:130] > [crio.tracing]
	I0729 14:04:00.192576 1010714 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 14:04:00.192584 1010714 command_runner.go:130] > # enable_tracing = false
	I0729 14:04:00.192591 1010714 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 14:04:00.192596 1010714 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 14:04:00.192604 1010714 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 14:04:00.192611 1010714 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 14:04:00.192615 1010714 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 14:04:00.192619 1010714 command_runner.go:130] > [crio.nri]
	I0729 14:04:00.192625 1010714 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 14:04:00.192631 1010714 command_runner.go:130] > # enable_nri = false
	I0729 14:04:00.192635 1010714 command_runner.go:130] > # NRI socket to listen on.
	I0729 14:04:00.192642 1010714 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 14:04:00.192646 1010714 command_runner.go:130] > # NRI plugin directory to use.
	I0729 14:04:00.192653 1010714 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 14:04:00.192658 1010714 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 14:04:00.192665 1010714 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 14:04:00.192670 1010714 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 14:04:00.192677 1010714 command_runner.go:130] > # nri_disable_connections = false
	I0729 14:04:00.192682 1010714 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 14:04:00.192689 1010714 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 14:04:00.192694 1010714 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 14:04:00.192700 1010714 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 14:04:00.192706 1010714 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 14:04:00.192711 1010714 command_runner.go:130] > [crio.stats]
	I0729 14:04:00.192717 1010714 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 14:04:00.192724 1010714 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 14:04:00.192729 1010714 command_runner.go:130] > # stats_collection_period = 0
	I0729 14:04:00.192756 1010714 command_runner.go:130] ! time="2024-07-29 14:04:00.143721054Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 14:04:00.192773 1010714 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 14:04:00.192916 1010714 cni.go:84] Creating CNI manager for ""
	I0729 14:04:00.192930 1010714 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 14:04:00.192942 1010714 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:04:00.192975 1010714 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-999945 NodeName:multinode-999945 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:04:00.193134 1010714 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-999945"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:04:00.193206 1010714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:04:00.203802 1010714 command_runner.go:130] > kubeadm
	I0729 14:04:00.203824 1010714 command_runner.go:130] > kubectl
	I0729 14:04:00.203828 1010714 command_runner.go:130] > kubelet
	I0729 14:04:00.203859 1010714 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:04:00.203909 1010714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:04:00.213929 1010714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0729 14:04:00.230869 1010714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:04:00.247645 1010714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 14:04:00.264195 1010714 ssh_runner.go:195] Run: grep 192.168.39.69	control-plane.minikube.internal$ /etc/hosts
	I0729 14:04:00.268101 1010714 command_runner.go:130] > 192.168.39.69	control-plane.minikube.internal
	I0729 14:04:00.268187 1010714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:04:00.406769 1010714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:04:00.422715 1010714 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945 for IP: 192.168.39.69
	I0729 14:04:00.422741 1010714 certs.go:194] generating shared ca certs ...
	I0729 14:04:00.422758 1010714 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:04:00.422961 1010714 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:04:00.423022 1010714 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:04:00.423037 1010714 certs.go:256] generating profile certs ...
	I0729 14:04:00.423150 1010714 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/client.key
	I0729 14:04:00.423230 1010714 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.key.f8bd8e5a
	I0729 14:04:00.423352 1010714 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.key
	I0729 14:04:00.423374 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 14:04:00.423396 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 14:04:00.423414 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 14:04:00.423430 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 14:04:00.423446 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 14:04:00.423467 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 14:04:00.423488 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 14:04:00.423528 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 14:04:00.423606 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:04:00.423658 1010714 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:04:00.423672 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:04:00.423702 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:04:00.423735 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:04:00.423763 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:04:00.423817 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:04:00.423889 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.423919 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.423939 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.424633 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:04:00.449331 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:04:00.473140 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:04:00.496846 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:04:00.520630 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:04:00.544044 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:04:00.567002 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:04:00.589776 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:04:00.612946 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:04:00.636353 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:04:00.659951 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:04:00.682752 1010714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:04:00.699133 1010714 ssh_runner.go:195] Run: openssl version
	I0729 14:04:00.705027 1010714 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 14:04:00.705193 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:04:00.716114 1010714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.720996 1010714 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.721028 1010714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.721063 1010714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.726623 1010714 command_runner.go:130] > 51391683
	I0729 14:04:00.726672 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:04:00.736102 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:04:00.747328 1010714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.751769 1010714 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.751803 1010714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.751864 1010714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.757478 1010714 command_runner.go:130] > 3ec20f2e
	I0729 14:04:00.757547 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:04:00.766878 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:04:00.777798 1010714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.782185 1010714 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.782309 1010714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.782363 1010714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.788010 1010714 command_runner.go:130] > b5213941
	I0729 14:04:00.788078 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
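	The run of commands above is how minikube publishes each CA into the guest's system trust store: the PEM is linked under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked to it so OpenSSL-based clients can resolve it. A minimal shell sketch of the same steps, using the minikubeCA.pem path and the b5213941 hash seen in the log (illustrative only, not the exact invocation minikube issues):
	
	    # link the CA into /etc/ssl/certs if it is present and non-empty
	    sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	    # compute the subject hash OpenSSL uses for trust-store lookups (prints b5213941 here)
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # create the <hash>.0 symlink unless it already exists
	    sudo /bin/bash -c "test -L /etc/ssl/certs/${hash}.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0"
	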
	I0729 14:04:00.797594 1010714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:04:00.801866 1010714 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:04:00.801886 1010714 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 14:04:00.801891 1010714 command_runner.go:130] > Device: 253,1	Inode: 4197931     Links: 1
	I0729 14:04:00.801901 1010714 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 14:04:00.801910 1010714 command_runner.go:130] > Access: 2024-07-29 13:57:16.043615079 +0000
	I0729 14:04:00.801918 1010714 command_runner.go:130] > Modify: 2024-07-29 13:57:16.043615079 +0000
	I0729 14:04:00.801926 1010714 command_runner.go:130] > Change: 2024-07-29 13:57:16.043615079 +0000
	I0729 14:04:00.801936 1010714 command_runner.go:130] >  Birth: 2024-07-29 13:57:16.043615079 +0000
	I0729 14:04:00.802083 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:04:00.807782 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.807864 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:04:00.813298 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.813504 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:04:00.819275 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.819552 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:04:00.825084 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.825149 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:04:00.830494 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.830658 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:04:00.835933 1010714 command_runner.go:130] > Certificate will not expire
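	Each control-plane certificate is then checked against a 24-hour window with openssl's -checkend flag: the command exits 0 and prints "Certificate will not expire" when the certificate remains valid for at least the given number of seconds (86400 here); otherwise it reports that it will expire and regeneration would be needed. A small sketch of that check for one of the files named above (only the openssl call itself appears in the log; the exit-status handling is an illustrative assumption):
	
	    # 86400 seconds = 24 hours; exit status 0 means the cert will not expire within that window
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	        echo "certificate is valid for at least another day"
	    else
	        echo "certificate expires within 24h"
	    fi
	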
	I0729 14:04:00.836154 1010714 kubeadm.go:392] StartCluster: {Name:multinode-999945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:04:00.836304 1010714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:04:00.836372 1010714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:04:00.873019 1010714 command_runner.go:130] > 673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03
	I0729 14:04:00.873041 1010714 command_runner.go:130] > bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1
	I0729 14:04:00.873047 1010714 command_runner.go:130] > 59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0
	I0729 14:04:00.873053 1010714 command_runner.go:130] > 5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae
	I0729 14:04:00.873058 1010714 command_runner.go:130] > 0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9
	I0729 14:04:00.873063 1010714 command_runner.go:130] > b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a
	I0729 14:04:00.873068 1010714 command_runner.go:130] > 8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a
	I0729 14:04:00.873074 1010714 command_runner.go:130] > 5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9
	I0729 14:04:00.873092 1010714 cri.go:89] found id: "673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03"
	I0729 14:04:00.873098 1010714 cri.go:89] found id: "bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1"
	I0729 14:04:00.873102 1010714 cri.go:89] found id: "59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0"
	I0729 14:04:00.873105 1010714 cri.go:89] found id: "5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae"
	I0729 14:04:00.873107 1010714 cri.go:89] found id: "0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9"
	I0729 14:04:00.873111 1010714 cri.go:89] found id: "b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a"
	I0729 14:04:00.873114 1010714 cri.go:89] found id: "8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a"
	I0729 14:04:00.873117 1010714 cri.go:89] found id: "5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9"
	I0729 14:04:00.873120 1010714 cri.go:89] found id: ""
	I0729 14:04:00.873159 1010714 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.283933815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261943283906276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b381704-e36a-40cf-acf6-01bfc7007032 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.284724145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0802dbcc-716a-4bc7-b3f4-405dda05d9c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.284796096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0802dbcc-716a-4bc7-b3f4-405dda05d9c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.285580952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d636e1aea8bbaad8f6b3cc0d58a263ad94888138adf99321ac6b61ae0b2881,PodSandboxId:4d524146d248bf0bfbe2c84cc55afe99abd2a759e6645122bd6fb9ab641ca65e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722261880725502114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265,PodSandboxId:bb0504cc5d2292b3efea562425ae6b52b1614d96d3a41fe4dbff6990b04f0687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722261847113704184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c5e0236d7f68d78fd78f55bfe26f82d81dd2e04a72fd0ab7b8cd9c3cd3cddf,PodSandboxId:5da0160d771ed533e7e965c35a37aec87c005658c7b12421fe8c06230d672221,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722261847020717443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c,PodSandboxId:bb933458be8e89a5adcac9e1ce994885825935413140efb05c34cf95f40c03d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722261847053181907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1,PodSandboxId:0daefb8dfb252256fc36ff466e68598feedcce373eb9d111de10e6f1697afc37,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722261847026630865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.ku
bernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d,PodSandboxId:7360f01f059aa74bdd9eaa2945894195a8e7c9fa7b51c9c0efc5380f64dc20f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722261843174915574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69,PodSandboxId:8a2c1b0c32a3c855e932569a73bee843ffc1d5114c413258d98968733b456759,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722261843179692993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4,PodSandboxId:148ec5d4cb38ca009ffad143c2b3f0970640707f0fbbd9be30f4b1328c153e69,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722261843136102046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2,PodSandboxId:f97677f35c33c6ebd44f000123a58de343578e750bf5d7aa7fa05004f970dcc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722261843121767603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b44f5b1f313073a03c94a584539e7b7c50ba3f490962a058b83d8587c99328,PodSandboxId:a61067b3c2d3233d868650b50c4edc1688a48b695f55384c83035ea06ccfda5c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722261526112822801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1,PodSandboxId:e6da8569cf93eee3a36799778c1a3f85174ee8aefbc37b9455ba48123c511ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722261473375862657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03,PodSandboxId:c5ce6ef88b8fd3c6ffd80578fda318e772633c025c1c519c4087fddae258a380,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722261473376844788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0,PodSandboxId:affa2cf556c82f23fdc68e3563ba3be3e9de0da97985d30b6bb896b41d5d1430,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722261461450837773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae,PodSandboxId:2ae0795b9aae115b9ec551ed1403ad19ec5382003137172c54b591b3a3f53466,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722261459327131087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.kubernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a,PodSandboxId:726b9bdc6ce0ee7ddecacc4fbf1f3e5a12190c9e62b1db2b84b633671b0bbd9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722261439873228946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6
e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9,PodSandboxId:d32773a146357a0103a36007d27da5a11fbd8c4253b5a41db29d68f36811e6ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722261439917826327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a,PodSandboxId:13ad0064d67154fbb24ac98097855d62c6e1e1e1fedb7da5635796b97e8cbd2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722261439842926684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9,PodSandboxId:0a6da18c8cf516c6c2140913fcf4a1c304d8215e4c619dc40db84b9f776a4f89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722261439835285795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0802dbcc-716a-4bc7-b3f4-405dda05d9c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.327364785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89b644ba-f1f8-476b-8691-dd713687d4d5 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.327441968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89b644ba-f1f8-476b-8691-dd713687d4d5 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.330577362Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abd7110c-9291-4f2f-9869-b23cbe718de7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.331092193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261943330991742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abd7110c-9291-4f2f-9869-b23cbe718de7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.331724503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5baebc8e-8b54-4d19-b2b5-6f156ae237c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.331796351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5baebc8e-8b54-4d19-b2b5-6f156ae237c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.332690831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d636e1aea8bbaad8f6b3cc0d58a263ad94888138adf99321ac6b61ae0b2881,PodSandboxId:4d524146d248bf0bfbe2c84cc55afe99abd2a759e6645122bd6fb9ab641ca65e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722261880725502114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265,PodSandboxId:bb0504cc5d2292b3efea562425ae6b52b1614d96d3a41fe4dbff6990b04f0687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722261847113704184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c5e0236d7f68d78fd78f55bfe26f82d81dd2e04a72fd0ab7b8cd9c3cd3cddf,PodSandboxId:5da0160d771ed533e7e965c35a37aec87c005658c7b12421fe8c06230d672221,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722261847020717443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c,PodSandboxId:bb933458be8e89a5adcac9e1ce994885825935413140efb05c34cf95f40c03d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722261847053181907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1,PodSandboxId:0daefb8dfb252256fc36ff466e68598feedcce373eb9d111de10e6f1697afc37,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722261847026630865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.ku
bernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d,PodSandboxId:7360f01f059aa74bdd9eaa2945894195a8e7c9fa7b51c9c0efc5380f64dc20f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722261843174915574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69,PodSandboxId:8a2c1b0c32a3c855e932569a73bee843ffc1d5114c413258d98968733b456759,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722261843179692993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4,PodSandboxId:148ec5d4cb38ca009ffad143c2b3f0970640707f0fbbd9be30f4b1328c153e69,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722261843136102046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2,PodSandboxId:f97677f35c33c6ebd44f000123a58de343578e750bf5d7aa7fa05004f970dcc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722261843121767603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b44f5b1f313073a03c94a584539e7b7c50ba3f490962a058b83d8587c99328,PodSandboxId:a61067b3c2d3233d868650b50c4edc1688a48b695f55384c83035ea06ccfda5c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722261526112822801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1,PodSandboxId:e6da8569cf93eee3a36799778c1a3f85174ee8aefbc37b9455ba48123c511ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722261473375862657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03,PodSandboxId:c5ce6ef88b8fd3c6ffd80578fda318e772633c025c1c519c4087fddae258a380,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722261473376844788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0,PodSandboxId:affa2cf556c82f23fdc68e3563ba3be3e9de0da97985d30b6bb896b41d5d1430,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722261461450837773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae,PodSandboxId:2ae0795b9aae115b9ec551ed1403ad19ec5382003137172c54b591b3a3f53466,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722261459327131087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.kubernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a,PodSandboxId:726b9bdc6ce0ee7ddecacc4fbf1f3e5a12190c9e62b1db2b84b633671b0bbd9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722261439873228946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6
e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9,PodSandboxId:d32773a146357a0103a36007d27da5a11fbd8c4253b5a41db29d68f36811e6ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722261439917826327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a,PodSandboxId:13ad0064d67154fbb24ac98097855d62c6e1e1e1fedb7da5635796b97e8cbd2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722261439842926684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9,PodSandboxId:0a6da18c8cf516c6c2140913fcf4a1c304d8215e4c619dc40db84b9f776a4f89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722261439835285795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5baebc8e-8b54-4d19-b2b5-6f156ae237c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.382642380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0bc3f12-d8e0-466a-8d5b-5362b2006c56 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.382729770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0bc3f12-d8e0-466a-8d5b-5362b2006c56 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.383616072Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fbb1598-58c4-48b1-be4b-553d75309eb0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.384191818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261943384142745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fbb1598-58c4-48b1-be4b-553d75309eb0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.385232331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=024c8997-e8eb-4340-aa18-9fbdcca33848 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.385331482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=024c8997-e8eb-4340-aa18-9fbdcca33848 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.385825007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d636e1aea8bbaad8f6b3cc0d58a263ad94888138adf99321ac6b61ae0b2881,PodSandboxId:4d524146d248bf0bfbe2c84cc55afe99abd2a759e6645122bd6fb9ab641ca65e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722261880725502114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265,PodSandboxId:bb0504cc5d2292b3efea562425ae6b52b1614d96d3a41fe4dbff6990b04f0687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722261847113704184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c5e0236d7f68d78fd78f55bfe26f82d81dd2e04a72fd0ab7b8cd9c3cd3cddf,PodSandboxId:5da0160d771ed533e7e965c35a37aec87c005658c7b12421fe8c06230d672221,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722261847020717443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c,PodSandboxId:bb933458be8e89a5adcac9e1ce994885825935413140efb05c34cf95f40c03d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722261847053181907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1,PodSandboxId:0daefb8dfb252256fc36ff466e68598feedcce373eb9d111de10e6f1697afc37,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722261847026630865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.ku
bernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d,PodSandboxId:7360f01f059aa74bdd9eaa2945894195a8e7c9fa7b51c9c0efc5380f64dc20f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722261843174915574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69,PodSandboxId:8a2c1b0c32a3c855e932569a73bee843ffc1d5114c413258d98968733b456759,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722261843179692993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4,PodSandboxId:148ec5d4cb38ca009ffad143c2b3f0970640707f0fbbd9be30f4b1328c153e69,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722261843136102046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2,PodSandboxId:f97677f35c33c6ebd44f000123a58de343578e750bf5d7aa7fa05004f970dcc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722261843121767603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b44f5b1f313073a03c94a584539e7b7c50ba3f490962a058b83d8587c99328,PodSandboxId:a61067b3c2d3233d868650b50c4edc1688a48b695f55384c83035ea06ccfda5c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722261526112822801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1,PodSandboxId:e6da8569cf93eee3a36799778c1a3f85174ee8aefbc37b9455ba48123c511ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722261473375862657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03,PodSandboxId:c5ce6ef88b8fd3c6ffd80578fda318e772633c025c1c519c4087fddae258a380,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722261473376844788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0,PodSandboxId:affa2cf556c82f23fdc68e3563ba3be3e9de0da97985d30b6bb896b41d5d1430,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722261461450837773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae,PodSandboxId:2ae0795b9aae115b9ec551ed1403ad19ec5382003137172c54b591b3a3f53466,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722261459327131087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.kubernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a,PodSandboxId:726b9bdc6ce0ee7ddecacc4fbf1f3e5a12190c9e62b1db2b84b633671b0bbd9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722261439873228946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6
e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9,PodSandboxId:d32773a146357a0103a36007d27da5a11fbd8c4253b5a41db29d68f36811e6ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722261439917826327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a,PodSandboxId:13ad0064d67154fbb24ac98097855d62c6e1e1e1fedb7da5635796b97e8cbd2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722261439842926684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9,PodSandboxId:0a6da18c8cf516c6c2140913fcf4a1c304d8215e4c619dc40db84b9f776a4f89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722261439835285795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=024c8997-e8eb-4340-aa18-9fbdcca33848 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.429360251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37e2ec87-da62-4dc8-a795-33704f28c369 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.429432100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37e2ec87-da62-4dc8-a795-33704f28c369 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.430841092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=776a1f02-848f-458e-8047-b319610f63fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.431354956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722261943431328884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=776a1f02-848f-458e-8047-b319610f63fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.432035386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8aae7ee7-c89c-4977-9374-26342e32d4d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.432114094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8aae7ee7-c89c-4977-9374-26342e32d4d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:05:43 multinode-999945 crio[2851]: time="2024-07-29 14:05:43.432662213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d636e1aea8bbaad8f6b3cc0d58a263ad94888138adf99321ac6b61ae0b2881,PodSandboxId:4d524146d248bf0bfbe2c84cc55afe99abd2a759e6645122bd6fb9ab641ca65e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722261880725502114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265,PodSandboxId:bb0504cc5d2292b3efea562425ae6b52b1614d96d3a41fe4dbff6990b04f0687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722261847113704184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c5e0236d7f68d78fd78f55bfe26f82d81dd2e04a72fd0ab7b8cd9c3cd3cddf,PodSandboxId:5da0160d771ed533e7e965c35a37aec87c005658c7b12421fe8c06230d672221,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722261847020717443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c,PodSandboxId:bb933458be8e89a5adcac9e1ce994885825935413140efb05c34cf95f40c03d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722261847053181907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1,PodSandboxId:0daefb8dfb252256fc36ff466e68598feedcce373eb9d111de10e6f1697afc37,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722261847026630865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.ku
bernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d,PodSandboxId:7360f01f059aa74bdd9eaa2945894195a8e7c9fa7b51c9c0efc5380f64dc20f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722261843174915574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69,PodSandboxId:8a2c1b0c32a3c855e932569a73bee843ffc1d5114c413258d98968733b456759,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722261843179692993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4,PodSandboxId:148ec5d4cb38ca009ffad143c2b3f0970640707f0fbbd9be30f4b1328c153e69,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722261843136102046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2,PodSandboxId:f97677f35c33c6ebd44f000123a58de343578e750bf5d7aa7fa05004f970dcc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722261843121767603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b44f5b1f313073a03c94a584539e7b7c50ba3f490962a058b83d8587c99328,PodSandboxId:a61067b3c2d3233d868650b50c4edc1688a48b695f55384c83035ea06ccfda5c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722261526112822801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1,PodSandboxId:e6da8569cf93eee3a36799778c1a3f85174ee8aefbc37b9455ba48123c511ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722261473375862657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03,PodSandboxId:c5ce6ef88b8fd3c6ffd80578fda318e772633c025c1c519c4087fddae258a380,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722261473376844788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0,PodSandboxId:affa2cf556c82f23fdc68e3563ba3be3e9de0da97985d30b6bb896b41d5d1430,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722261461450837773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae,PodSandboxId:2ae0795b9aae115b9ec551ed1403ad19ec5382003137172c54b591b3a3f53466,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722261459327131087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.kubernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a,PodSandboxId:726b9bdc6ce0ee7ddecacc4fbf1f3e5a12190c9e62b1db2b84b633671b0bbd9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722261439873228946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6
e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9,PodSandboxId:d32773a146357a0103a36007d27da5a11fbd8c4253b5a41db29d68f36811e6ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722261439917826327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a,PodSandboxId:13ad0064d67154fbb24ac98097855d62c6e1e1e1fedb7da5635796b97e8cbd2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722261439842926684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9,PodSandboxId:0a6da18c8cf516c6c2140913fcf4a1c304d8215e4c619dc40db84b9f776a4f89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722261439835285795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8aae7ee7-c89c-4977-9374-26342e32d4d0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f6d636e1aea8b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   4d524146d248b       busybox-fc5497c4f-cfbps
	5e64607e004a7       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   bb0504cc5d229       kindnet-lwhbt
	dd5909526fbed       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   bb933458be8e8       coredns-7db6d8ff4d-67wml
	6567da46b50a8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   0daefb8dfb252       kube-proxy-cs48t
	46c5e0236d7f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   5da0160d771ed       storage-provisioner
	a390b907d6854       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   8a2c1b0c32a3c       kube-controller-manager-multinode-999945
	b44edb91a2178       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   7360f01f059aa       kube-scheduler-multinode-999945
	05319a5585b93       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   148ec5d4cb38c       etcd-multinode-999945
	9ee15e0cf532d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   f97677f35c33c       kube-apiserver-multinode-999945
	63b44f5b1f313       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   a61067b3c2d32       busybox-fc5497c4f-cfbps
	673291a0360a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   c5ce6ef88b8fd       storage-provisioner
	bac92b71b7328       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   e6da8569cf93e       coredns-7db6d8ff4d-67wml
	59c0df3b19a4f       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   affa2cf556c82       kindnet-lwhbt
	5897cf93acb9c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   2ae0795b9aae1       kube-proxy-cs48t
	0d786c480dd87       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   d32773a146357       kube-scheduler-multinode-999945
	b6f9a6db526f1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   726b9bdc6ce0e       etcd-multinode-999945
	8793a1ffb1b89       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   13ad0064d6715       kube-apiserver-multinode-999945
	5de1847d7078b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   0a6da18c8cf51       kube-controller-manager-multinode-999945
	
	
	==> coredns [bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1] <==
	[INFO] 10.244.1.2:41439 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776374s
	[INFO] 10.244.1.2:40510 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130581s
	[INFO] 10.244.1.2:47791 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095775s
	[INFO] 10.244.1.2:53005 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001249471s
	[INFO] 10.244.1.2:40177 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106319s
	[INFO] 10.244.1.2:38003 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152364s
	[INFO] 10.244.1.2:34702 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061771s
	[INFO] 10.244.0.3:54854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129225s
	[INFO] 10.244.0.3:57174 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145125s
	[INFO] 10.244.0.3:56250 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050171s
	[INFO] 10.244.0.3:46830 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096578s
	[INFO] 10.244.1.2:33812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135562s
	[INFO] 10.244.1.2:33198 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100874s
	[INFO] 10.244.1.2:43223 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111144s
	[INFO] 10.244.1.2:48727 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065837s
	[INFO] 10.244.0.3:55357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090527s
	[INFO] 10.244.0.3:52260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125398s
	[INFO] 10.244.0.3:59507 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103114s
	[INFO] 10.244.0.3:43384 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007954s
	[INFO] 10.244.1.2:34845 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120083s
	[INFO] 10.244.1.2:41200 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077641s
	[INFO] 10.244.1.2:34736 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215047s
	[INFO] 10.244.1.2:43686 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160032s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56654 - 38647 "HINFO IN 2625830199519297080.4413156860955595871. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013297454s
	
	
	==> describe nodes <==
	Name:               multinode-999945
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-999945
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=multinode-999945
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_57_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-999945
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:05:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:04:06 +0000   Mon, 29 Jul 2024 13:57:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:04:06 +0000   Mon, 29 Jul 2024 13:57:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:04:06 +0000   Mon, 29 Jul 2024 13:57:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:04:06 +0000   Mon, 29 Jul 2024 13:57:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-999945
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eeec9bdcc1524e7da0bdaa5dbe13ee4f
	  System UUID:                eeec9bdc-c152-4e7d-a0bd-aa5dbe13ee4f
	  Boot ID:                    0863024b-4695-4eef-a6fe-b126a667817e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cfbps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 coredns-7db6d8ff4d-67wml                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m5s
	  kube-system                 etcd-multinode-999945                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m18s
	  kube-system                 kindnet-lwhbt                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m5s
	  kube-system                 kube-apiserver-multinode-999945             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-controller-manager-multinode-999945    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-cs48t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-scheduler-multinode-999945             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m3s                 kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  Starting                 8m19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     8m18s                kubelet          Node multinode-999945 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m18s                kubelet          Node multinode-999945 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s                kubelet          Node multinode-999945 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m5s                 node-controller  Node multinode-999945 event: Registered Node multinode-999945 in Controller
	  Normal  NodeReady                7m51s                kubelet          Node multinode-999945 status is now: NodeReady
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node multinode-999945 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node multinode-999945 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node multinode-999945 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                  node-controller  Node multinode-999945 event: Registered Node multinode-999945 in Controller
	
	
	Name:               multinode-999945-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-999945-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=multinode-999945
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T14_04_46_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:04:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-999945-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:05:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:05:15 +0000   Mon, 29 Jul 2024 14:04:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:05:15 +0000   Mon, 29 Jul 2024 14:04:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:05:15 +0000   Mon, 29 Jul 2024 14:04:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:05:15 +0000   Mon, 29 Jul 2024 14:05:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-999945-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 94292164340a480f8ca8a62dd0a5c6d9
	  System UUID:                94292164-340a-480f-8ca8-a62dd0a5c6d9
	  Boot ID:                    b0b6710f-522e-4441-ab10-1ab5beb4c6cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r6skw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-76rsw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m20s
	  kube-system                 kube-proxy-bdwfd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m14s                  kube-proxy  
	  Normal  Starting                 54s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m20s (x2 over 7m20s)  kubelet     Node multinode-999945-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s (x2 over 7m20s)  kubelet     Node multinode-999945-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s (x2 over 7m20s)  kubelet     Node multinode-999945-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m20s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m1s                   kubelet     Node multinode-999945-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  58s (x2 over 58s)      kubelet     Node multinode-999945-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x2 over 58s)      kubelet     Node multinode-999945-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x2 over 58s)      kubelet     Node multinode-999945-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                    kubelet     Node multinode-999945-m02 status is now: NodeReady
	
	
	Name:               multinode-999945-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-999945-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=multinode-999945
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T14_05_23_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:05:23 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-999945-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:05:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:05:40 +0000   Mon, 29 Jul 2024 14:05:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:05:40 +0000   Mon, 29 Jul 2024 14:05:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:05:40 +0000   Mon, 29 Jul 2024 14:05:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:05:40 +0000   Mon, 29 Jul 2024 14:05:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    multinode-999945-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4539b502233a465c8c72dc94ecb27bb0
	  System UUID:                4539b502-233a-465c-8c72-dc94ecb27bb0
	  Boot ID:                    7925e904-42e9-472f-acf9-c0479253eb88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wc8pr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-proxy-dpx6f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m21s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m35s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m26s (x2 over 6m27s)  kubelet     Node multinode-999945-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x2 over 6m27s)  kubelet     Node multinode-999945-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x2 over 6m27s)  kubelet     Node multinode-999945-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m8s                   kubelet     Node multinode-999945-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m40s (x2 over 5m40s)  kubelet     Node multinode-999945-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x2 over 5m40s)  kubelet     Node multinode-999945-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m40s (x2 over 5m40s)  kubelet     Node multinode-999945-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m40s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m22s                  kubelet     Node multinode-999945-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x2 over 21s)      kubelet     Node multinode-999945-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x2 over 21s)      kubelet     Node multinode-999945-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x2 over 21s)      kubelet     Node multinode-999945-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                     kubelet     Node multinode-999945-m03 status is now: NodeReady
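
The three node detail blocks above are standard "kubectl describe node" output captured by the test harness. As a rough way to regenerate them for this run's nodes, assuming the kubectl context created for the multinode-999945 profile still exists (minikube names contexts after the profile), one could run:

    kubectl --context multinode-999945 describe node multinode-999945 multinode-999945-m02 multinode-999945-m03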
	
	
	==> dmesg <==
	[  +0.055548] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053697] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.172607] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.129669] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.262305] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.142592] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.372244] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.058325] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.979765] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.102938] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.228904] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.987218] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
	[ +14.287225] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 13:58] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 14:03] systemd-fstab-generator[2769]: Ignoring "noauto" option for root device
	[  +0.135871] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.161470] systemd-fstab-generator[2795]: Ignoring "noauto" option for root device
	[  +0.149360] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.278023] systemd-fstab-generator[2835]: Ignoring "noauto" option for root device
	[  +0.709644] systemd-fstab-generator[2934]: Ignoring "noauto" option for root device
	[Jul29 14:04] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +4.681650] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.841461] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.631841] systemd-fstab-generator[3891]: Ignoring "noauto" option for root device
	[ +19.281162] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4] <==
	{"level":"info","ts":"2024-07-29T14:04:03.629407Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:04:03.629458Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:04:03.629467Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:04:03.629702Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T14:04:03.629729Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T14:04:03.632858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b switched to configuration voters=(10491453631398908315)"}
	{"level":"info","ts":"2024-07-29T14:04:03.63294Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","added-peer-id":"9199217ddd03919b","added-peer-peer-urls":["https://192.168.39.69:2380"]}
	{"level":"info","ts":"2024-07-29T14:04:03.63574Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:04:03.635787Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:04:04.941099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T14:04:04.941224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:04:04.941274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgPreVoteResp from 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2024-07-29T14:04:04.94131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T14:04:04.941334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgVoteResp from 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-07-29T14:04:04.941361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became leader at term 3"}
	{"level":"info","ts":"2024-07-29T14:04:04.94139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9199217ddd03919b elected leader 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-07-29T14:04:04.952593Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9199217ddd03919b","local-member-attributes":"{Name:multinode-999945 ClientURLs:[https://192.168.39.69:2379]}","request-path":"/0/members/9199217ddd03919b/attributes","cluster-id":"6c21f62219c1156b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:04:04.953086Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:04:04.953186Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:04:04.953228Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:04:04.953261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:04:04.960506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T14:04:04.970849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.69:2379"}
	{"level":"info","ts":"2024-07-29T14:04:49.465115Z","caller":"traceutil/trace.go:171","msg":"trace[1930564265] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"188.512868ms","start":"2024-07-29T14:04:49.276575Z","end":"2024-07-29T14:04:49.465088Z","steps":["trace[1930564265] 'process raft request'  (duration: 188.352905ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T14:05:27.255884Z","caller":"traceutil/trace.go:171","msg":"trace[2131238968] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"215.509076ms","start":"2024-07-29T14:05:27.04033Z","end":"2024-07-29T14:05:27.255839Z","steps":["trace[2131238968] 'process raft request'  (duration: 215.306774ms)"],"step_count":1}
	
	
	==> etcd [b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a] <==
	{"level":"info","ts":"2024-07-29T13:57:20.530331Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:57:20.530733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:57:20.530762Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:58:23.741798Z","caller":"traceutil/trace.go:171","msg":"trace[1892503512] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"233.248257ms","start":"2024-07-29T13:58:23.50851Z","end":"2024-07-29T13:58:23.741758Z","steps":["trace[1892503512] 'process raft request'  (duration: 227.598525ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:58:23.742079Z","caller":"traceutil/trace.go:171","msg":"trace[294383329] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:462; }","duration":"224.416622ms","start":"2024-07-29T13:58:23.517642Z","end":"2024-07-29T13:58:23.742058Z","steps":["trace[294383329] 'read index received'  (duration: 218.467608ms)","trace[294383329] 'applied index is now lower than readState.Index'  (duration: 5.947668ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T13:58:23.74226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.57129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-999945-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T13:58:23.742628Z","caller":"traceutil/trace.go:171","msg":"trace[275719880] range","detail":"{range_begin:/registry/minions/multinode-999945-m02; range_end:; response_count:1; response_revision:443; }","duration":"225.023717ms","start":"2024-07-29T13:58:23.517597Z","end":"2024-07-29T13:58:23.74262Z","steps":["trace[275719880] 'agreement among raft nodes before linearized reading'  (duration: 224.540679ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:58:23.742395Z","caller":"traceutil/trace.go:171","msg":"trace[92227388] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"169.614152ms","start":"2024-07-29T13:58:23.572774Z","end":"2024-07-29T13:58:23.742388Z","steps":["trace[92227388] 'process raft request'  (duration: 168.73098ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:58:28.76072Z","caller":"traceutil/trace.go:171","msg":"trace[486527720] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"180.021158ms","start":"2024-07-29T13:58:28.58068Z","end":"2024-07-29T13:58:28.760701Z","steps":["trace[486527720] 'process raft request'  (duration: 179.926899ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:58:28.781381Z","caller":"traceutil/trace.go:171","msg":"trace[1347007931] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"193.959422ms","start":"2024-07-29T13:58:28.587408Z","end":"2024-07-29T13:58:28.781367Z","steps":["trace[1347007931] 'process raft request'  (duration: 193.748753ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:59:17.209697Z","caller":"traceutil/trace.go:171","msg":"trace[556575040] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:610; }","duration":"114.926355ms","start":"2024-07-29T13:59:17.094735Z","end":"2024-07-29T13:59:17.209661Z","steps":["trace[556575040] 'read index received'  (duration: 84.458398ms)","trace[556575040] 'applied index is now lower than readState.Index'  (duration: 30.467511ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:59:17.209867Z","caller":"traceutil/trace.go:171","msg":"trace[1374024925] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"164.190521ms","start":"2024-07-29T13:59:17.045665Z","end":"2024-07-29T13:59:17.209855Z","steps":["trace[1374024925] 'process raft request'  (duration: 163.963259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:59:17.210094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.319587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-999945-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T13:59:17.210148Z","caller":"traceutil/trace.go:171","msg":"trace[1725965248] range","detail":"{range_begin:/registry/minions/multinode-999945-m03; range_end:; response_count:1; response_revision:577; }","duration":"115.424833ms","start":"2024-07-29T13:59:17.094711Z","end":"2024-07-29T13:59:17.210136Z","steps":["trace[1725965248] 'agreement among raft nodes before linearized reading'  (duration: 115.245334ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:59:17.210421Z","caller":"traceutil/trace.go:171","msg":"trace[1320330072] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"248.835244ms","start":"2024-07-29T13:59:16.961575Z","end":"2024-07-29T13:59:17.21041Z","steps":["trace[1320330072] 'process raft request'  (duration: 217.609584ms)","trace[1320330072] 'compare'  (duration: 30.303689ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T14:02:27.505937Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T14:02:27.506112Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-999945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	{"level":"warn","ts":"2024-07-29T14:02:27.50627Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T14:02:27.50639Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T14:02:27.547865Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T14:02:27.547947Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T14:02:27.549426Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9199217ddd03919b","current-leader-member-id":"9199217ddd03919b"}
	{"level":"info","ts":"2024-07-29T14:02:27.552402Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T14:02:27.552575Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T14:02:27.552605Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-999945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	
	
	==> kernel <==
	 14:05:43 up 8 min,  0 users,  load average: 0.67, 0.32, 0.14
	Linux multinode-999945 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0] <==
	I0729 14:01:42.567392       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:01:52.571686       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:01:52.571744       1 main.go:299] handling current node
	I0729 14:01:52.571760       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:01:52.571765       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:01:52.571901       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:01:52.571925       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:02:02.575349       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:02:02.575401       1 main.go:299] handling current node
	I0729 14:02:02.575420       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:02:02.575426       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:02:02.575604       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:02:02.575627       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:02:12.566838       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:02:12.566917       1 main.go:299] handling current node
	I0729 14:02:12.566935       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:02:12.566941       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:02:12.567199       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:02:12.567234       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:02:22.572181       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:02:22.572274       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:02:22.572436       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:02:22.572463       1 main.go:299] handling current node
	I0729 14:02:22.572486       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:02:22.572501       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265] <==
	I0729 14:04:58.072096       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:05:08.069788       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:05:08.069938       1 main.go:299] handling current node
	I0729 14:05:08.069980       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:05:08.070071       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:05:08.070263       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:05:08.070286       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:05:18.070925       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:05:18.071104       1 main.go:299] handling current node
	I0729 14:05:18.071133       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:05:18.071151       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:05:18.071296       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:05:18.071357       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:05:28.072533       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:05:28.072629       1 main.go:299] handling current node
	I0729 14:05:28.072643       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:05:28.072649       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:05:28.073107       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:05:28.073217       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.2.0/24] 
	I0729 14:05:38.070427       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:05:38.070507       1 main.go:299] handling current node
	I0729 14:05:38.070521       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:05:38.070527       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:05:38.070680       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:05:38.070717       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a] <==
	W0729 14:02:27.528624       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528705       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528755       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528784       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528811       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528837       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528864       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528891       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528920       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528946       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528975       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529064       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529104       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529133       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529161       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529565       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529596       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529622       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529831       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529861       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530308       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530346       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530443       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530544       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530625       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2] <==
	I0729 14:04:06.429816       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 14:04:06.429936       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 14:04:06.491315       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 14:04:06.498105       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 14:04:06.498145       1 policy_source.go:224] refreshing policies
	I0729 14:04:06.527376       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 14:04:06.528618       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 14:04:06.528706       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 14:04:06.529151       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 14:04:06.529451       1 aggregator.go:165] initial CRD sync complete...
	I0729 14:04:06.529493       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 14:04:06.529516       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 14:04:06.529538       1 cache.go:39] Caches are synced for autoregister controller
	I0729 14:04:06.530077       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 14:04:06.544880       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 14:04:06.549625       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 14:04:06.568591       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 14:04:07.352456       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 14:04:08.354738       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 14:04:08.497727       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 14:04:08.521654       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 14:04:08.599682       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 14:04:08.607305       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 14:04:18.708581       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 14:04:18.733665       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9] <==
	I0729 13:58:23.792324       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m02" podCIDRs=["10.244.1.0/24"]
	I0729 13:58:28.578692       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-999945-m02"
	I0729 13:58:42.441382       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 13:58:44.824913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.60669ms"
	I0729 13:58:44.835254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.095696ms"
	I0729 13:58:44.835714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="130.439µs"
	I0729 13:58:44.839676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.436µs"
	I0729 13:58:44.844458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.491µs"
	I0729 13:58:46.313413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.335342ms"
	I0729 13:58:46.313551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.378µs"
	I0729 13:58:46.588359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.958301ms"
	I0729 13:58:46.588594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.041µs"
	I0729 13:59:17.213957       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 13:59:17.214748       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-999945-m03\" does not exist"
	I0729 13:59:17.252884       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m03" podCIDRs=["10.244.2.0/24"]
	I0729 13:59:18.599954       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-999945-m03"
	I0729 13:59:35.396598       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:00:02.936085       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:00:03.851204       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:00:03.851310       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-999945-m03\" does not exist"
	I0729 14:00:03.858071       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m03" podCIDRs=["10.244.3.0/24"]
	I0729 14:00:21.767330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:01:08.667889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:01:08.729656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.479687ms"
	I0729 14:01:08.730340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.738µs"
	
	
	==> kube-controller-manager [a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69] <==
	I0729 14:04:19.395294       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 14:04:19.430351       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 14:04:19.430390       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 14:04:40.879914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.749915ms"
	I0729 14:04:40.903464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.50172ms"
	I0729 14:04:40.903569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.346µs"
	I0729 14:04:45.226114       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-999945-m02\" does not exist"
	I0729 14:04:45.238383       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m02" podCIDRs=["10.244.1.0/24"]
	I0729 14:04:47.129624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.91µs"
	I0729 14:04:47.171330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.089µs"
	I0729 14:04:47.178576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.334µs"
	I0729 14:04:47.206568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.998µs"
	I0729 14:04:47.215974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.785µs"
	I0729 14:04:47.218924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.218µs"
	I0729 14:04:49.469121       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="206.68µs"
	I0729 14:05:03.248454       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:05:03.270985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.023µs"
	I0729 14:05:03.282663       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.297µs"
	I0729 14:05:05.783228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.718176ms"
	I0729 14:05:05.783371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.534µs"
	I0729 14:05:21.708369       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:05:23.069578       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-999945-m03\" does not exist"
	I0729 14:05:23.069807       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:05:23.080468       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m03" podCIDRs=["10.244.2.0/24"]
	I0729 14:05:40.475949       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	
	
	==> kube-proxy [5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae] <==
	I0729 13:57:39.827518       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:57:39.841951       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	I0729 13:57:39.888927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:57:39.888964       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:57:39.888979       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:57:39.892399       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:57:39.892638       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:57:39.892681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:57:39.894982       1 config.go:192] "Starting service config controller"
	I0729 13:57:39.895362       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:57:39.895427       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:57:39.895449       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:57:39.896675       1 config.go:319] "Starting node config controller"
	I0729 13:57:39.896712       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:57:39.996051       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:57:39.996105       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:57:39.996800       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1] <==
	I0729 14:04:07.267715       1 server_linux.go:69] "Using iptables proxy"
	I0729 14:04:07.287759       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	I0729 14:04:07.352694       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 14:04:07.352755       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:04:07.352773       1 server_linux.go:165] "Using iptables Proxier"
	I0729 14:04:07.359409       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 14:04:07.359609       1 server.go:872] "Version info" version="v1.30.3"
	I0729 14:04:07.359638       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:04:07.361185       1 config.go:192] "Starting service config controller"
	I0729 14:04:07.361227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:04:07.361254       1 config.go:101] "Starting endpoint slice config controller"
	I0729 14:04:07.361258       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:04:07.361748       1 config.go:319] "Starting node config controller"
	I0729 14:04:07.364481       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:04:07.462211       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 14:04:07.462279       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:04:07.464716       1 shared_informer.go:320] Caches are synced for node config
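
The component log excerpts in this section (etcd, kindnet, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) are the kind of output that "minikube logs" gathers for a profile. A sketch of capturing the same data manually, assuming the profile name and binary path used in this run, would be:

    out/minikube-linux-amd64 -p multinode-999945 logs --file=multinode-999945-logs.txt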
	
	
	==> kube-scheduler [0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9] <==
	E0729 13:57:22.408916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:57:22.407956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:22.408156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:22.408163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:57:23.231590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:57:23.231638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 13:57:23.311669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:57:23.311792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 13:57:23.390328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 13:57:23.390494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 13:57:23.491767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:23.491826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:57:23.506218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 13:57:23.506339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:57:23.511151       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:23.511332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 13:57:23.637810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:57:23.638963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 13:57:23.638775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:23.639228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:57:23.951635       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 13:57:23.951754       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 13:57:27.101693       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:02:27.501839       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0729 14:02:27.502563       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d] <==
	I0729 14:04:04.662664       1 serving.go:380] Generated self-signed cert in-memory
	W0729 14:04:06.385699       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 14:04:06.385801       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:04:06.385826       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 14:04:06.385832       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 14:04:06.444850       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 14:04:06.445289       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:04:06.451811       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 14:04:06.452061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:04:06.453732       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 14:04:06.453800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 14:04:06.553148       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 14:04:03 multinode-999945 kubelet[3066]: E0729 14:04:03.310819    3066 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-999945&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	Jul 29 14:04:03 multinode-999945 kubelet[3066]: I0729 14:04:03.954660    3066 kubelet_node_status.go:73] "Attempting to register node" node="multinode-999945"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.425834    3066 apiserver.go:52] "Watching apiserver"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.443970    3066 topology_manager.go:215] "Topology Admit Handler" podUID="db6e2780-35e8-4d82-8742-1ad45f71071a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-67wml"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.444400    3066 topology_manager.go:215] "Topology Admit Handler" podUID="3c736673-1a18-424e-b6f2-564730f5378a" podNamespace="kube-system" podName="kindnet-lwhbt"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.444640    3066 topology_manager.go:215] "Topology Admit Handler" podUID="9b81b754-2cfd-4fc9-ae72-f6c2efdf9796" podNamespace="kube-system" podName="kube-proxy-cs48t"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.444850    3066 topology_manager.go:215] "Topology Admit Handler" podUID="aeb8f386-5491-4271-8d95-19f1bd0cda53" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.444976    3066 topology_manager.go:215] "Topology Admit Handler" podUID="43fabb3b-28df-4938-9d21-ab3d93cf1306" podNamespace="default" podName="busybox-fc5497c4f-cfbps"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.446112    3066 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.524684    3066 kubelet_node_status.go:112] "Node was previously registered" node="multinode-999945"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.524885    3066 kubelet_node_status.go:76] "Successfully registered node" node="multinode-999945"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.526214    3066 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.527241    3066 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.541708    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b81b754-2cfd-4fc9-ae72-f6c2efdf9796-lib-modules\") pod \"kube-proxy-cs48t\" (UID: \"9b81b754-2cfd-4fc9-ae72-f6c2efdf9796\") " pod="kube-system/kube-proxy-cs48t"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.541805    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3c736673-1a18-424e-b6f2-564730f5378a-cni-cfg\") pod \"kindnet-lwhbt\" (UID: \"3c736673-1a18-424e-b6f2-564730f5378a\") " pod="kube-system/kindnet-lwhbt"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.541853    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b81b754-2cfd-4fc9-ae72-f6c2efdf9796-xtables-lock\") pod \"kube-proxy-cs48t\" (UID: \"9b81b754-2cfd-4fc9-ae72-f6c2efdf9796\") " pod="kube-system/kube-proxy-cs48t"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.541910    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c736673-1a18-424e-b6f2-564730f5378a-xtables-lock\") pod \"kindnet-lwhbt\" (UID: \"3c736673-1a18-424e-b6f2-564730f5378a\") " pod="kube-system/kindnet-lwhbt"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.541967    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c736673-1a18-424e-b6f2-564730f5378a-lib-modules\") pod \"kindnet-lwhbt\" (UID: \"3c736673-1a18-424e-b6f2-564730f5378a\") " pod="kube-system/kindnet-lwhbt"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.542063    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aeb8f386-5491-4271-8d95-19f1bd0cda53-tmp\") pod \"storage-provisioner\" (UID: \"aeb8f386-5491-4271-8d95-19f1bd0cda53\") " pod="kube-system/storage-provisioner"
	Jul 29 14:04:09 multinode-999945 kubelet[3066]: I0729 14:04:09.914898    3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 14:05:02 multinode-999945 kubelet[3066]: E0729 14:05:02.524381    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:05:02 multinode-999945 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:05:02 multinode-999945 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:05:02 multinode-999945 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:05:02 multinode-999945 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:05:43.008554 1011770 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19338-974764/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-999945 -n multinode-999945
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-999945 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (320.44s)
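For manual triage of this failure, the two checks quoted above can be reproduced against the restarted profile. This is only a sketch and assumes the test run's kubeconfig for the multinode-999945 context is still reachable; the resource and user names are taken straight from the "forbidden" messages in the kube-scheduler log, and the pod-phase query mirrors the post-mortem helper (helpers_test.go:261):

	# Confirm the scheduler's RBAC permissions have settled (the forbidden errors above were logged during startup)
	kubectl --context multinode-999945 auth can-i list persistentvolumes --as=system:kube-scheduler
	# Same non-Running pod check the post-mortem helper runs
	kubectl --context multinode-999945 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running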

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 stop
E0729 14:07:06.665885  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-999945 stop: exit status 82 (2m0.473457357s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-999945-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-999945 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-999945 status: exit status 3 (18.67723518s)

                                                
                                                
-- stdout --
	multinode-999945
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-999945-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:08:06.084775 1012433 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host
	E0729 14:08:06.084818 1012433 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-999945 status" : exit status 3
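The stop timed out (GUEST_STOP_TIMEOUT with m02 still reported "Running"), and the follow-up status could no longer reach m02 over SSH (no route to host), which suggests the worker went down only after minikube gave up waiting. A possible next step, in line with the advice box above, is to re-run the stop with the same verbosity flags used elsewhere in this run and capture the driver logs to a file; the commands below are a sketch using this test's profile name:

	out/minikube-linux-amd64 -p multinode-999945 stop --alsologtostderr -v=7
	out/minikube-linux-amd64 -p multinode-999945 logs --file=logs.txt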
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-999945 -n multinode-999945
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-999945 logs -n 25: (1.473852907s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m02:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945:/home/docker/cp-test_multinode-999945-m02_multinode-999945.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n multinode-999945 sudo cat                                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /home/docker/cp-test_multinode-999945-m02_multinode-999945.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m02:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03:/home/docker/cp-test_multinode-999945-m02_multinode-999945-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n multinode-999945-m03 sudo cat                                   | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /home/docker/cp-test_multinode-999945-m02_multinode-999945-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp testdata/cp-test.txt                                                | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2451309451/001/cp-test_multinode-999945-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945:/home/docker/cp-test_multinode-999945-m03_multinode-999945.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n multinode-999945 sudo cat                                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /home/docker/cp-test_multinode-999945-m03_multinode-999945.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt                       | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m02:/home/docker/cp-test_multinode-999945-m03_multinode-999945-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n                                                                 | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | multinode-999945-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-999945 ssh -n multinode-999945-m02 sudo cat                                   | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	|         | /home/docker/cp-test_multinode-999945-m03_multinode-999945-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-999945 node stop m03                                                          | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 13:59 UTC |
	| node    | multinode-999945 node start                                                             | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 13:59 UTC | 29 Jul 24 14:00 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-999945                                                                | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:00 UTC |                     |
	| stop    | -p multinode-999945                                                                     | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:00 UTC |                     |
	| start   | -p multinode-999945                                                                     | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:02 UTC | 29 Jul 24 14:05 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-999945                                                                | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:05 UTC |                     |
	| node    | multinode-999945 node delete                                                            | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:05 UTC | 29 Jul 24 14:05 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-999945 stop                                                                   | multinode-999945 | jenkins | v1.33.1 | 29 Jul 24 14:05 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:02:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:02:26.264359 1010714 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:02:26.264503 1010714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:02:26.264514 1010714 out.go:304] Setting ErrFile to fd 2...
	I0729 14:02:26.264518 1010714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:02:26.264751 1010714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:02:26.265372 1010714 out.go:298] Setting JSON to false
	I0729 14:02:26.266414 1010714 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13498,"bootTime":1722248248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:02:26.266487 1010714 start.go:139] virtualization: kvm guest
	I0729 14:02:26.268688 1010714 out.go:177] * [multinode-999945] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:02:26.270337 1010714 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:02:26.270338 1010714 notify.go:220] Checking for updates...
	I0729 14:02:26.271895 1010714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:02:26.273236 1010714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:02:26.274472 1010714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:02:26.275760 1010714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:02:26.276959 1010714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:02:26.278522 1010714 config.go:182] Loaded profile config "multinode-999945": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:02:26.278640 1010714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:02:26.279125 1010714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:02:26.279222 1010714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:02:26.295397 1010714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0729 14:02:26.295860 1010714 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:02:26.296629 1010714 main.go:141] libmachine: Using API Version  1
	I0729 14:02:26.296650 1010714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:02:26.297057 1010714 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:02:26.297221 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:02:26.333206 1010714 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:02:26.334473 1010714 start.go:297] selected driver: kvm2
	I0729 14:02:26.334497 1010714 start.go:901] validating driver "kvm2" against &{Name:multinode-999945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:02:26.334712 1010714 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:02:26.335030 1010714 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:02:26.335100 1010714 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:02:26.350303 1010714 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:02:26.351006 1010714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:02:26.351070 1010714 cni.go:84] Creating CNI manager for ""
	I0729 14:02:26.351082 1010714 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 14:02:26.351149 1010714 start.go:340] cluster config:
	{Name:multinode-999945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-999945 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:02:26.351290 1010714 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:02:26.353214 1010714 out.go:177] * Starting "multinode-999945" primary control-plane node in "multinode-999945" cluster
	I0729 14:02:26.354463 1010714 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:02:26.354509 1010714 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 14:02:26.354520 1010714 cache.go:56] Caching tarball of preloaded images
	I0729 14:02:26.354592 1010714 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:02:26.354602 1010714 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 14:02:26.354734 1010714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/config.json ...
	I0729 14:02:26.354931 1010714 start.go:360] acquireMachinesLock for multinode-999945: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:02:26.354973 1010714 start.go:364] duration metric: took 24.035µs to acquireMachinesLock for "multinode-999945"
	I0729 14:02:26.354987 1010714 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:02:26.355002 1010714 fix.go:54] fixHost starting: 
	I0729 14:02:26.355314 1010714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:02:26.355351 1010714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:02:26.369556 1010714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0729 14:02:26.369960 1010714 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:02:26.370437 1010714 main.go:141] libmachine: Using API Version  1
	I0729 14:02:26.370471 1010714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:02:26.370796 1010714 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:02:26.370988 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:02:26.371131 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetState
	I0729 14:02:26.372636 1010714 fix.go:112] recreateIfNeeded on multinode-999945: state=Running err=<nil>
	W0729 14:02:26.372657 1010714 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:02:26.375136 1010714 out.go:177] * Updating the running kvm2 "multinode-999945" VM ...
	I0729 14:02:26.376479 1010714 machine.go:94] provisionDockerMachine start ...
	I0729 14:02:26.376500 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:02:26.376711 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:26.379242 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.379610 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.379630 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.379794 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:26.379974 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.380099 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.380225 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:26.380394 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:02:26.380684 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:02:26.380700 1010714 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:02:26.498625 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-999945
	
	I0729 14:02:26.498661 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetMachineName
	I0729 14:02:26.498930 1010714 buildroot.go:166] provisioning hostname "multinode-999945"
	I0729 14:02:26.498957 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetMachineName
	I0729 14:02:26.499174 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:26.502085 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.502508 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.502530 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.502670 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:26.502869 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.503019 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.503151 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:26.503294 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:02:26.503478 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:02:26.503490 1010714 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-999945 && echo "multinode-999945" | sudo tee /etc/hostname
	I0729 14:02:26.636620 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-999945
	
	I0729 14:02:26.636659 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:26.639373 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.639721 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.639758 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.639948 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:26.640154 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.640343 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:26.640486 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:26.640708 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:02:26.640910 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:02:26.640928 1010714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-999945' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-999945/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-999945' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:02:26.753868 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:02:26.753898 1010714 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:02:26.753923 1010714 buildroot.go:174] setting up certificates
	I0729 14:02:26.753937 1010714 provision.go:84] configureAuth start
	I0729 14:02:26.753963 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetMachineName
	I0729 14:02:26.754244 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetIP
	I0729 14:02:26.756944 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.757283 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.757318 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.757504 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:26.759793 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.760174 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:26.760218 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:26.760365 1010714 provision.go:143] copyHostCerts
	I0729 14:02:26.760401 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:02:26.760466 1010714 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:02:26.760478 1010714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:02:26.760553 1010714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:02:26.760649 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:02:26.760671 1010714 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:02:26.760684 1010714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:02:26.760713 1010714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:02:26.760756 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:02:26.760771 1010714 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:02:26.760777 1010714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:02:26.760797 1010714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:02:26.760891 1010714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.multinode-999945 san=[127.0.0.1 192.168.39.69 localhost minikube multinode-999945]
	I0729 14:02:27.214007 1010714 provision.go:177] copyRemoteCerts
	I0729 14:02:27.214102 1010714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:02:27.214141 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:27.216742 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:27.217102 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:27.217136 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:27.217313 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:27.217510 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:27.217693 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:27.217806 1010714 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 14:02:27.304837 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 14:02:27.304928 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:02:27.330109 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 14:02:27.330193 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 14:02:27.357332 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 14:02:27.357408 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 14:02:27.381578 1010714 provision.go:87] duration metric: took 627.626897ms to configureAuth
	I0729 14:02:27.381603 1010714 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:02:27.381848 1010714 config.go:182] Loaded profile config "multinode-999945": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:02:27.381938 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:02:27.384654 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:27.385029 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:02:27.385056 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:02:27.385190 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:02:27.385402 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:27.385603 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:02:27.385737 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:02:27.385916 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:02:27.386074 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:02:27.386089 1010714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:03:58.240500 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:03:58.240555 1010714 machine.go:97] duration metric: took 1m31.864058512s to provisionDockerMachine
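Most of the 1m31s reported above is the single SSH command: it was issued at 14:02:27 and only returned at 14:03:58, because it ends with a sudo systemctl restart crio. For reference, the file that command writes is a one-line drop-in; assuming the exact arguments shown in the log, it would read:

	# /etc/sysconfig/crio.minikube -- written by the provisioning command above
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '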
	I0729 14:03:58.240589 1010714 start.go:293] postStartSetup for "multinode-999945" (driver="kvm2")
	I0729 14:03:58.240608 1010714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:03:58.240645 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.241032 1010714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:03:58.241076 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:03:58.244042 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.244522 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.244547 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.244700 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:03:58.244909 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.245124 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:03:58.245278 1010714 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 14:03:58.332453 1010714 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:03:58.336682 1010714 command_runner.go:130] > NAME=Buildroot
	I0729 14:03:58.336708 1010714 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 14:03:58.336714 1010714 command_runner.go:130] > ID=buildroot
	I0729 14:03:58.336721 1010714 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 14:03:58.336729 1010714 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 14:03:58.336772 1010714 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:03:58.336787 1010714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:03:58.336871 1010714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:03:58.336945 1010714 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:03:58.336954 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /etc/ssl/certs/9820462.pem
	I0729 14:03:58.337043 1010714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:03:58.346437 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:03:58.369887 1010714 start.go:296] duration metric: took 129.282129ms for postStartSetup
	I0729 14:03:58.369929 1010714 fix.go:56] duration metric: took 1m32.014932766s for fixHost
	I0729 14:03:58.369964 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:03:58.372701 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.373045 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.373076 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.373219 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:03:58.373426 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.373616 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.373764 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:03:58.373932 1010714 main.go:141] libmachine: Using SSH client type: native
	I0729 14:03:58.374095 1010714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 14:03:58.374106 1010714 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:03:58.480896 1010714 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722261838.452380819
	
	I0729 14:03:58.480930 1010714 fix.go:216] guest clock: 1722261838.452380819
	I0729 14:03:58.480943 1010714 fix.go:229] Guest: 2024-07-29 14:03:58.452380819 +0000 UTC Remote: 2024-07-29 14:03:58.369934649 +0000 UTC m=+92.145841276 (delta=82.44617ms)
	I0729 14:03:58.480974 1010714 fix.go:200] guest clock delta is within tolerance: 82.44617ms
	I0729 14:03:58.480983 1010714 start.go:83] releasing machines lock for "multinode-999945", held for 1m32.126000528s
	I0729 14:03:58.481016 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.481286 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetIP
	I0729 14:03:58.483590 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.483966 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.483998 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.484101 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.484649 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.484820 1010714 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 14:03:58.484933 1010714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:03:58.484984 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:03:58.485049 1010714 ssh_runner.go:195] Run: cat /version.json
	I0729 14:03:58.485074 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 14:03:58.487436 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.487746 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.487773 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.487866 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.487909 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:03:58.488059 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.488212 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:03:58.488386 1010714 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 14:03:58.488395 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:03:58.488446 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:03:58.488605 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 14:03:58.488783 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 14:03:58.488941 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 14:03:58.489106 1010714 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 14:03:58.583653 1010714 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 14:03:58.584180 1010714 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 14:03:58.584336 1010714 ssh_runner.go:195] Run: systemctl --version
	I0729 14:03:58.589970 1010714 command_runner.go:130] > systemd 252 (252)
	I0729 14:03:58.590002 1010714 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 14:03:58.590289 1010714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:03:58.752732 1010714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 14:03:58.758928 1010714 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 14:03:58.759014 1010714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:03:58.759087 1010714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:03:58.768850 1010714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 14:03:58.768878 1010714 start.go:495] detecting cgroup driver to use...
	I0729 14:03:58.768954 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:03:58.785852 1010714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:03:58.799185 1010714 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:03:58.799248 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:03:58.812998 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:03:58.826563 1010714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:03:58.965142 1010714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:03:59.104459 1010714 docker.go:233] disabling docker service ...
	I0729 14:03:59.104540 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:03:59.121095 1010714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:03:59.134818 1010714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:03:59.272654 1010714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:03:59.412442 1010714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:03:59.426844 1010714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:03:59.445910 1010714 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 14:03:59.445970 1010714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:03:59.446025 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.457704 1010714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:03:59.457783 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.469084 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.479843 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.490545 1010714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:03:59.501469 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.512697 1010714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:03:59.523067 1010714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
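Taken together, the sed edits above configure CRI-O through /etc/crio/crio.conf.d/02-crio.conf. A sketch of the keys they leave behind (the TOML section headers are an assumption based on the standard crio.conf layout; everything else in the file is untouched):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]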
	I0729 14:03:59.534154 1010714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:03:59.543908 1010714 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 14:03:59.543998 1010714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:03:59.553776 1010714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:03:59.687977 1010714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:03:59.927093 1010714 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:03:59.927166 1010714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:03:59.932043 1010714 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 14:03:59.932064 1010714 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 14:03:59.932070 1010714 command_runner.go:130] > Device: 0,22	Inode: 1343        Links: 1
	I0729 14:03:59.932077 1010714 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 14:03:59.932082 1010714 command_runner.go:130] > Access: 2024-07-29 14:03:59.796578763 +0000
	I0729 14:03:59.932088 1010714 command_runner.go:130] > Modify: 2024-07-29 14:03:59.796578763 +0000
	I0729 14:03:59.932092 1010714 command_runner.go:130] > Change: 2024-07-29 14:03:59.796578763 +0000
	I0729 14:03:59.932096 1010714 command_runner.go:130] >  Birth: -
	I0729 14:03:59.932216 1010714 start.go:563] Will wait 60s for crictl version
	I0729 14:03:59.932285 1010714 ssh_runner.go:195] Run: which crictl
	I0729 14:03:59.936785 1010714 command_runner.go:130] > /usr/bin/crictl
	I0729 14:03:59.936866 1010714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:03:59.978114 1010714 command_runner.go:130] > Version:  0.1.0
	I0729 14:03:59.978144 1010714 command_runner.go:130] > RuntimeName:  cri-o
	I0729 14:03:59.978150 1010714 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 14:03:59.978158 1010714 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 14:03:59.978239 1010714 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
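The same runtime check can be repeated by hand on the node; the commands and the expected output are exactly what the log records above:

	$ which crictl
	/usr/bin/crictl
	$ sudo /usr/bin/crictl version
	Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1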
	I0729 14:03:59.978332 1010714 ssh_runner.go:195] Run: crio --version
	I0729 14:04:00.006746 1010714 command_runner.go:130] > crio version 1.29.1
	I0729 14:04:00.006773 1010714 command_runner.go:130] > Version:        1.29.1
	I0729 14:04:00.006780 1010714 command_runner.go:130] > GitCommit:      unknown
	I0729 14:04:00.006785 1010714 command_runner.go:130] > GitCommitDate:  unknown
	I0729 14:04:00.006789 1010714 command_runner.go:130] > GitTreeState:   clean
	I0729 14:04:00.006796 1010714 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 14:04:00.006800 1010714 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 14:04:00.006804 1010714 command_runner.go:130] > Compiler:       gc
	I0729 14:04:00.006808 1010714 command_runner.go:130] > Platform:       linux/amd64
	I0729 14:04:00.006812 1010714 command_runner.go:130] > Linkmode:       dynamic
	I0729 14:04:00.006839 1010714 command_runner.go:130] > BuildTags:      
	I0729 14:04:00.006848 1010714 command_runner.go:130] >   containers_image_ostree_stub
	I0729 14:04:00.006853 1010714 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 14:04:00.006863 1010714 command_runner.go:130] >   btrfs_noversion
	I0729 14:04:00.006867 1010714 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 14:04:00.006877 1010714 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 14:04:00.006883 1010714 command_runner.go:130] >   seccomp
	I0729 14:04:00.006888 1010714 command_runner.go:130] > LDFlags:          unknown
	I0729 14:04:00.006894 1010714 command_runner.go:130] > SeccompEnabled:   true
	I0729 14:04:00.006898 1010714 command_runner.go:130] > AppArmorEnabled:  false
	I0729 14:04:00.006972 1010714 ssh_runner.go:195] Run: crio --version
	I0729 14:04:00.035423 1010714 command_runner.go:130] > crio version 1.29.1
	I0729 14:04:00.035457 1010714 command_runner.go:130] > Version:        1.29.1
	I0729 14:04:00.035463 1010714 command_runner.go:130] > GitCommit:      unknown
	I0729 14:04:00.035467 1010714 command_runner.go:130] > GitCommitDate:  unknown
	I0729 14:04:00.035471 1010714 command_runner.go:130] > GitTreeState:   clean
	I0729 14:04:00.035477 1010714 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 14:04:00.035481 1010714 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 14:04:00.035485 1010714 command_runner.go:130] > Compiler:       gc
	I0729 14:04:00.035490 1010714 command_runner.go:130] > Platform:       linux/amd64
	I0729 14:04:00.035494 1010714 command_runner.go:130] > Linkmode:       dynamic
	I0729 14:04:00.035499 1010714 command_runner.go:130] > BuildTags:      
	I0729 14:04:00.035506 1010714 command_runner.go:130] >   containers_image_ostree_stub
	I0729 14:04:00.035510 1010714 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 14:04:00.035514 1010714 command_runner.go:130] >   btrfs_noversion
	I0729 14:04:00.035520 1010714 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 14:04:00.035524 1010714 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 14:04:00.035530 1010714 command_runner.go:130] >   seccomp
	I0729 14:04:00.035534 1010714 command_runner.go:130] > LDFlags:          unknown
	I0729 14:04:00.035538 1010714 command_runner.go:130] > SeccompEnabled:   true
	I0729 14:04:00.035541 1010714 command_runner.go:130] > AppArmorEnabled:  false
	I0729 14:04:00.038142 1010714 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:04:00.039362 1010714 main.go:141] libmachine: (multinode-999945) Calling .GetIP
	I0729 14:04:00.041797 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:04:00.042132 1010714 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 14:04:00.042153 1010714 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 14:04:00.042364 1010714 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:04:00.046537 1010714 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 14:04:00.046651 1010714 kubeadm.go:883] updating cluster {Name:multinode-999945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:04:00.046806 1010714 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:04:00.046857 1010714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:04:00.098123 1010714 command_runner.go:130] > {
	I0729 14:04:00.098155 1010714 command_runner.go:130] >   "images": [
	I0729 14:04:00.098161 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098172 1010714 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 14:04:00.098178 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098186 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 14:04:00.098191 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098196 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098208 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 14:04:00.098222 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 14:04:00.098228 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098236 1010714 command_runner.go:130] >       "size": "87165492",
	I0729 14:04:00.098254 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098263 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.098273 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098282 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098288 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098296 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098306 1010714 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 14:04:00.098315 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098330 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 14:04:00.098339 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098346 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098359 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 14:04:00.098374 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 14:04:00.098383 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098392 1010714 command_runner.go:130] >       "size": "87174707",
	I0729 14:04:00.098402 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098416 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.098425 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098433 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098441 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098446 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098460 1010714 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 14:04:00.098469 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098480 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 14:04:00.098488 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098496 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098510 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 14:04:00.098521 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 14:04:00.098526 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098537 1010714 command_runner.go:130] >       "size": "1363676",
	I0729 14:04:00.098546 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098553 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.098561 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098569 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098577 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098584 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098605 1010714 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 14:04:00.098615 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098623 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 14:04:00.098628 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098634 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098648 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 14:04:00.098672 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 14:04:00.098680 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098688 1010714 command_runner.go:130] >       "size": "31470524",
	I0729 14:04:00.098697 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098704 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.098713 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098720 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098727 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098734 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098747 1010714 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 14:04:00.098767 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098781 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 14:04:00.098789 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098797 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098811 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 14:04:00.098826 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 14:04:00.098835 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098846 1010714 command_runner.go:130] >       "size": "61245718",
	I0729 14:04:00.098854 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.098862 1010714 command_runner.go:130] >       "username": "nonroot",
	I0729 14:04:00.098872 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.098880 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.098886 1010714 command_runner.go:130] >     },
	I0729 14:04:00.098894 1010714 command_runner.go:130] >     {
	I0729 14:04:00.098905 1010714 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 14:04:00.098914 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.098924 1010714 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 14:04:00.098932 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098939 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.098953 1010714 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 14:04:00.098975 1010714 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 14:04:00.098983 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.098991 1010714 command_runner.go:130] >       "size": "150779692",
	I0729 14:04:00.099000 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099007 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.099016 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099024 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099033 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099041 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099050 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099056 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099069 1010714 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 14:04:00.099079 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099089 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 14:04:00.099097 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099104 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099119 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 14:04:00.099134 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 14:04:00.099142 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099150 1010714 command_runner.go:130] >       "size": "117609954",
	I0729 14:04:00.099158 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099166 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.099174 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099180 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099187 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099197 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099203 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099212 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099223 1010714 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 14:04:00.099232 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099241 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 14:04:00.099249 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099257 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099288 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 14:04:00.099304 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 14:04:00.099313 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099328 1010714 command_runner.go:130] >       "size": "112198984",
	I0729 14:04:00.099339 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099345 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.099350 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099356 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099362 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099368 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099373 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099377 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099384 1010714 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 14:04:00.099388 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099392 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 14:04:00.099395 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099399 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099406 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 14:04:00.099412 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 14:04:00.099416 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099419 1010714 command_runner.go:130] >       "size": "85953945",
	I0729 14:04:00.099423 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.099426 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099430 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099433 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099436 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099439 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099445 1010714 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 14:04:00.099449 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099453 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 14:04:00.099456 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099460 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099467 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 14:04:00.099476 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 14:04:00.099479 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099483 1010714 command_runner.go:130] >       "size": "63051080",
	I0729 14:04:00.099487 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099491 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.099494 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099501 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099507 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099511 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.099514 1010714 command_runner.go:130] >     },
	I0729 14:04:00.099518 1010714 command_runner.go:130] >     {
	I0729 14:04:00.099526 1010714 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 14:04:00.099533 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.099537 1010714 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 14:04:00.099543 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099548 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.099554 1010714 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 14:04:00.099562 1010714 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 14:04:00.099566 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.099572 1010714 command_runner.go:130] >       "size": "750414",
	I0729 14:04:00.099576 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.099581 1010714 command_runner.go:130] >         "value": "65535"
	I0729 14:04:00.099584 1010714 command_runner.go:130] >       },
	I0729 14:04:00.099590 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.099594 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.099597 1010714 command_runner.go:130] >       "pinned": true
	I0729 14:04:00.099601 1010714 command_runner.go:130] >     }
	I0729 14:04:00.099604 1010714 command_runner.go:130] >   ]
	I0729 14:04:00.099607 1010714 command_runner.go:130] > }
	I0729 14:04:00.099810 1010714 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:04:00.099822 1010714 crio.go:433] Images already preloaded, skipping extraction
	I0729 14:04:00.099885 1010714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:04:00.137036 1010714 command_runner.go:130] > {
	I0729 14:04:00.137061 1010714 command_runner.go:130] >   "images": [
	I0729 14:04:00.137066 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137074 1010714 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 14:04:00.137079 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137085 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 14:04:00.137088 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137093 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137105 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 14:04:00.137129 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 14:04:00.137144 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137154 1010714 command_runner.go:130] >       "size": "87165492",
	I0729 14:04:00.137167 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137175 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137183 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137189 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137193 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137199 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137208 1010714 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 14:04:00.137218 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137230 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 14:04:00.137236 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137243 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137258 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 14:04:00.137270 1010714 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 14:04:00.137276 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137280 1010714 command_runner.go:130] >       "size": "87174707",
	I0729 14:04:00.137284 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137295 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137304 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137313 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137321 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137327 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137339 1010714 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 14:04:00.137348 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137356 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 14:04:00.137376 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137386 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137401 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 14:04:00.137416 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 14:04:00.137425 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137432 1010714 command_runner.go:130] >       "size": "1363676",
	I0729 14:04:00.137441 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137448 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137461 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137473 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137481 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137489 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137502 1010714 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 14:04:00.137508 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137520 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 14:04:00.137528 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137535 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137550 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 14:04:00.137576 1010714 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 14:04:00.137584 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137590 1010714 command_runner.go:130] >       "size": "31470524",
	I0729 14:04:00.137599 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137606 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137616 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137622 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137630 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137636 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137649 1010714 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 14:04:00.137657 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137665 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 14:04:00.137671 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137675 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137687 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 14:04:00.137701 1010714 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 14:04:00.137712 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137722 1010714 command_runner.go:130] >       "size": "61245718",
	I0729 14:04:00.137728 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.137738 1010714 command_runner.go:130] >       "username": "nonroot",
	I0729 14:04:00.137745 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137754 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137763 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137770 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137776 1010714 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 14:04:00.137784 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.137792 1010714 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 14:04:00.137812 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137821 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.137835 1010714 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 14:04:00.137849 1010714 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 14:04:00.137857 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.137864 1010714 command_runner.go:130] >       "size": "150779692",
	I0729 14:04:00.137872 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.137878 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.137886 1010714 command_runner.go:130] >       },
	I0729 14:04:00.137896 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.137905 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.137915 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.137923 1010714 command_runner.go:130] >     },
	I0729 14:04:00.137931 1010714 command_runner.go:130] >     {
	I0729 14:04:00.137941 1010714 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 14:04:00.137982 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138006 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 14:04:00.138012 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138022 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138036 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 14:04:00.138050 1010714 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 14:04:00.138059 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138066 1010714 command_runner.go:130] >       "size": "117609954",
	I0729 14:04:00.138075 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.138082 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.138086 1010714 command_runner.go:130] >       },
	I0729 14:04:00.138095 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138105 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138113 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.138121 1010714 command_runner.go:130] >     },
	I0729 14:04:00.138127 1010714 command_runner.go:130] >     {
	I0729 14:04:00.138141 1010714 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 14:04:00.138151 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138162 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 14:04:00.138170 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138178 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138214 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 14:04:00.138230 1010714 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 14:04:00.138239 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138246 1010714 command_runner.go:130] >       "size": "112198984",
	I0729 14:04:00.138254 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.138261 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.138268 1010714 command_runner.go:130] >       },
	I0729 14:04:00.138275 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138284 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138290 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.138295 1010714 command_runner.go:130] >     },
	I0729 14:04:00.138298 1010714 command_runner.go:130] >     {
	I0729 14:04:00.138309 1010714 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 14:04:00.138318 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138326 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 14:04:00.138335 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138342 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138355 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 14:04:00.138372 1010714 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 14:04:00.138380 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138387 1010714 command_runner.go:130] >       "size": "85953945",
	I0729 14:04:00.138392 1010714 command_runner.go:130] >       "uid": null,
	I0729 14:04:00.138401 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138411 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138417 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.138426 1010714 command_runner.go:130] >     },
	I0729 14:04:00.138434 1010714 command_runner.go:130] >     {
	I0729 14:04:00.138444 1010714 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 14:04:00.138453 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138465 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 14:04:00.138473 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138481 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138489 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 14:04:00.138503 1010714 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 14:04:00.138512 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138518 1010714 command_runner.go:130] >       "size": "63051080",
	I0729 14:04:00.138529 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.138537 1010714 command_runner.go:130] >         "value": "0"
	I0729 14:04:00.138546 1010714 command_runner.go:130] >       },
	I0729 14:04:00.138555 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138564 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138572 1010714 command_runner.go:130] >       "pinned": false
	I0729 14:04:00.138579 1010714 command_runner.go:130] >     },
	I0729 14:04:00.138583 1010714 command_runner.go:130] >     {
	I0729 14:04:00.138589 1010714 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 14:04:00.138599 1010714 command_runner.go:130] >       "repoTags": [
	I0729 14:04:00.138610 1010714 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 14:04:00.138615 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138625 1010714 command_runner.go:130] >       "repoDigests": [
	I0729 14:04:00.138639 1010714 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 14:04:00.138653 1010714 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 14:04:00.138661 1010714 command_runner.go:130] >       ],
	I0729 14:04:00.138671 1010714 command_runner.go:130] >       "size": "750414",
	I0729 14:04:00.138678 1010714 command_runner.go:130] >       "uid": {
	I0729 14:04:00.138682 1010714 command_runner.go:130] >         "value": "65535"
	I0729 14:04:00.138689 1010714 command_runner.go:130] >       },
	I0729 14:04:00.138696 1010714 command_runner.go:130] >       "username": "",
	I0729 14:04:00.138705 1010714 command_runner.go:130] >       "spec": null,
	I0729 14:04:00.138711 1010714 command_runner.go:130] >       "pinned": true
	I0729 14:04:00.138720 1010714 command_runner.go:130] >     }
	I0729 14:04:00.138726 1010714 command_runner.go:130] >   ]
	I0729 14:04:00.138734 1010714 command_runner.go:130] > }
	I0729 14:04:00.138897 1010714 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:04:00.138913 1010714 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:04:00.138921 1010714 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.30.3 crio true true} ...
	I0729 14:04:00.139077 1010714 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-999945 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:04:00.139166 1010714 ssh_runner.go:195] Run: crio config
	I0729 14:04:00.180804 1010714 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 14:04:00.180836 1010714 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 14:04:00.180843 1010714 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 14:04:00.180846 1010714 command_runner.go:130] > #
	I0729 14:04:00.180864 1010714 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 14:04:00.180870 1010714 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 14:04:00.180876 1010714 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 14:04:00.180892 1010714 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 14:04:00.180899 1010714 command_runner.go:130] > # reload'.
	I0729 14:04:00.180907 1010714 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 14:04:00.180926 1010714 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 14:04:00.180942 1010714 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 14:04:00.180950 1010714 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 14:04:00.180955 1010714 command_runner.go:130] > [crio]
	I0729 14:04:00.180964 1010714 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 14:04:00.180969 1010714 command_runner.go:130] > # containers images, in this directory.
	I0729 14:04:00.180975 1010714 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 14:04:00.180984 1010714 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 14:04:00.181109 1010714 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 14:04:00.181139 1010714 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 14:04:00.181499 1010714 command_runner.go:130] > # imagestore = ""
	I0729 14:04:00.181521 1010714 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 14:04:00.181532 1010714 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 14:04:00.181643 1010714 command_runner.go:130] > storage_driver = "overlay"
	I0729 14:04:00.181662 1010714 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 14:04:00.181673 1010714 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 14:04:00.181680 1010714 command_runner.go:130] > storage_option = [
	I0729 14:04:00.181776 1010714 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 14:04:00.181873 1010714 command_runner.go:130] > ]
	I0729 14:04:00.181894 1010714 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 14:04:00.181916 1010714 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 14:04:00.182117 1010714 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 14:04:00.182131 1010714 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 14:04:00.182141 1010714 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 14:04:00.182148 1010714 command_runner.go:130] > # always happen on a node reboot
	I0729 14:04:00.182462 1010714 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 14:04:00.182485 1010714 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 14:04:00.182495 1010714 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 14:04:00.182506 1010714 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 14:04:00.182624 1010714 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 14:04:00.182640 1010714 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 14:04:00.182652 1010714 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 14:04:00.182907 1010714 command_runner.go:130] > # internal_wipe = true
	I0729 14:04:00.182922 1010714 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 14:04:00.182929 1010714 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 14:04:00.183180 1010714 command_runner.go:130] > # internal_repair = false
	I0729 14:04:00.183190 1010714 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 14:04:00.183196 1010714 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 14:04:00.183202 1010714 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 14:04:00.183535 1010714 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 14:04:00.183547 1010714 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 14:04:00.183550 1010714 command_runner.go:130] > [crio.api]
	I0729 14:04:00.183556 1010714 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 14:04:00.184018 1010714 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 14:04:00.184028 1010714 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 14:04:00.184464 1010714 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 14:04:00.184484 1010714 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 14:04:00.184493 1010714 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 14:04:00.184790 1010714 command_runner.go:130] > # stream_port = "0"
	I0729 14:04:00.184800 1010714 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 14:04:00.185093 1010714 command_runner.go:130] > # stream_enable_tls = false
	I0729 14:04:00.185103 1010714 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 14:04:00.185422 1010714 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 14:04:00.185442 1010714 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 14:04:00.185452 1010714 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 14:04:00.185461 1010714 command_runner.go:130] > # minutes.
	I0729 14:04:00.185669 1010714 command_runner.go:130] > # stream_tls_cert = ""
	I0729 14:04:00.185695 1010714 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 14:04:00.185705 1010714 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 14:04:00.185894 1010714 command_runner.go:130] > # stream_tls_key = ""
	I0729 14:04:00.185906 1010714 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 14:04:00.185915 1010714 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 14:04:00.185937 1010714 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 14:04:00.186099 1010714 command_runner.go:130] > # stream_tls_ca = ""
	I0729 14:04:00.186111 1010714 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 14:04:00.186266 1010714 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 14:04:00.186283 1010714 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 14:04:00.186485 1010714 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 14:04:00.186502 1010714 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 14:04:00.186514 1010714 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 14:04:00.186524 1010714 command_runner.go:130] > [crio.runtime]
	I0729 14:04:00.186536 1010714 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 14:04:00.186547 1010714 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 14:04:00.186556 1010714 command_runner.go:130] > # "nofile=1024:2048"
	I0729 14:04:00.186567 1010714 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 14:04:00.186681 1010714 command_runner.go:130] > # default_ulimits = [
	I0729 14:04:00.186811 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.186826 1010714 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 14:04:00.187097 1010714 command_runner.go:130] > # no_pivot = false
	I0729 14:04:00.187112 1010714 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 14:04:00.187125 1010714 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 14:04:00.187450 1010714 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 14:04:00.187465 1010714 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 14:04:00.187473 1010714 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 14:04:00.187483 1010714 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 14:04:00.187605 1010714 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 14:04:00.187617 1010714 command_runner.go:130] > # Cgroup setting for conmon
	I0729 14:04:00.187628 1010714 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 14:04:00.187744 1010714 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 14:04:00.187760 1010714 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 14:04:00.187767 1010714 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 14:04:00.187778 1010714 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 14:04:00.187787 1010714 command_runner.go:130] > conmon_env = [
	I0729 14:04:00.187844 1010714 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 14:04:00.187911 1010714 command_runner.go:130] > ]
	I0729 14:04:00.187924 1010714 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 14:04:00.187932 1010714 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 14:04:00.187943 1010714 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 14:04:00.188047 1010714 command_runner.go:130] > # default_env = [
	I0729 14:04:00.188170 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.188185 1010714 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 14:04:00.188197 1010714 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 14:04:00.189805 1010714 command_runner.go:130] > # selinux = false
	I0729 14:04:00.189828 1010714 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 14:04:00.189836 1010714 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 14:04:00.189844 1010714 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 14:04:00.189849 1010714 command_runner.go:130] > # seccomp_profile = ""
	I0729 14:04:00.189854 1010714 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 14:04:00.189859 1010714 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 14:04:00.189865 1010714 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 14:04:00.189870 1010714 command_runner.go:130] > # which might increase security.
	I0729 14:04:00.189875 1010714 command_runner.go:130] > # This option is currently deprecated,
	I0729 14:04:00.189884 1010714 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 14:04:00.189892 1010714 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 14:04:00.189899 1010714 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 14:04:00.189907 1010714 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 14:04:00.189915 1010714 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 14:04:00.189922 1010714 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 14:04:00.189929 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.189934 1010714 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 14:04:00.189941 1010714 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 14:04:00.189948 1010714 command_runner.go:130] > # the cgroup blockio controller.
	I0729 14:04:00.189952 1010714 command_runner.go:130] > # blockio_config_file = ""
	I0729 14:04:00.189960 1010714 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 14:04:00.189966 1010714 command_runner.go:130] > # blockio parameters.
	I0729 14:04:00.189971 1010714 command_runner.go:130] > # blockio_reload = false
	I0729 14:04:00.189979 1010714 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 14:04:00.189983 1010714 command_runner.go:130] > # irqbalance daemon.
	I0729 14:04:00.189991 1010714 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 14:04:00.189999 1010714 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 14:04:00.190006 1010714 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 14:04:00.190014 1010714 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 14:04:00.190029 1010714 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 14:04:00.190037 1010714 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 14:04:00.190044 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.190048 1010714 command_runner.go:130] > # rdt_config_file = ""
	I0729 14:04:00.190056 1010714 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 14:04:00.190060 1010714 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 14:04:00.190077 1010714 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 14:04:00.190084 1010714 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 14:04:00.190090 1010714 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 14:04:00.190099 1010714 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 14:04:00.190105 1010714 command_runner.go:130] > # will be added.
	I0729 14:04:00.190111 1010714 command_runner.go:130] > # default_capabilities = [
	I0729 14:04:00.190116 1010714 command_runner.go:130] > # 	"CHOWN",
	I0729 14:04:00.190120 1010714 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 14:04:00.190126 1010714 command_runner.go:130] > # 	"FSETID",
	I0729 14:04:00.190130 1010714 command_runner.go:130] > # 	"FOWNER",
	I0729 14:04:00.190136 1010714 command_runner.go:130] > # 	"SETGID",
	I0729 14:04:00.190140 1010714 command_runner.go:130] > # 	"SETUID",
	I0729 14:04:00.190145 1010714 command_runner.go:130] > # 	"SETPCAP",
	I0729 14:04:00.190149 1010714 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 14:04:00.190152 1010714 command_runner.go:130] > # 	"KILL",
	I0729 14:04:00.190156 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190164 1010714 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 14:04:00.190172 1010714 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 14:04:00.190177 1010714 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 14:04:00.190184 1010714 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 14:04:00.190192 1010714 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 14:04:00.190197 1010714 command_runner.go:130] > default_sysctls = [
	I0729 14:04:00.190201 1010714 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 14:04:00.190207 1010714 command_runner.go:130] > ]
	I0729 14:04:00.190212 1010714 command_runner.go:130] > # List of devices on the host that a
	I0729 14:04:00.190221 1010714 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 14:04:00.190227 1010714 command_runner.go:130] > # allowed_devices = [
	I0729 14:04:00.190231 1010714 command_runner.go:130] > # 	"/dev/fuse",
	I0729 14:04:00.190234 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190239 1010714 command_runner.go:130] > # List of additional devices, specified as
	I0729 14:04:00.190248 1010714 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 14:04:00.190253 1010714 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 14:04:00.190259 1010714 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 14:04:00.190264 1010714 command_runner.go:130] > # additional_devices = [
	I0729 14:04:00.190267 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190274 1010714 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 14:04:00.190281 1010714 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 14:04:00.190286 1010714 command_runner.go:130] > # 	"/etc/cdi",
	I0729 14:04:00.190290 1010714 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 14:04:00.190295 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190302 1010714 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 14:04:00.190309 1010714 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 14:04:00.190314 1010714 command_runner.go:130] > # Defaults to false.
	I0729 14:04:00.190319 1010714 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 14:04:00.190328 1010714 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 14:04:00.190334 1010714 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 14:04:00.190340 1010714 command_runner.go:130] > # hooks_dir = [
	I0729 14:04:00.190344 1010714 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 14:04:00.190348 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.190356 1010714 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 14:04:00.190362 1010714 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 14:04:00.190369 1010714 command_runner.go:130] > # its default mounts from the following two files:
	I0729 14:04:00.190373 1010714 command_runner.go:130] > #
	I0729 14:04:00.190379 1010714 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 14:04:00.190387 1010714 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 14:04:00.190394 1010714 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 14:04:00.190397 1010714 command_runner.go:130] > #
	I0729 14:04:00.190403 1010714 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 14:04:00.190411 1010714 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 14:04:00.190419 1010714 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 14:04:00.190426 1010714 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 14:04:00.190429 1010714 command_runner.go:130] > #
	I0729 14:04:00.190433 1010714 command_runner.go:130] > # default_mounts_file = ""
	I0729 14:04:00.190440 1010714 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 14:04:00.190446 1010714 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 14:04:00.190453 1010714 command_runner.go:130] > pids_limit = 1024
	I0729 14:04:00.190459 1010714 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 14:04:00.190467 1010714 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 14:04:00.190475 1010714 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 14:04:00.190484 1010714 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 14:04:00.190490 1010714 command_runner.go:130] > # log_size_max = -1
	I0729 14:04:00.190497 1010714 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 14:04:00.190503 1010714 command_runner.go:130] > # log_to_journald = false
	I0729 14:04:00.190512 1010714 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 14:04:00.190521 1010714 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 14:04:00.190533 1010714 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 14:04:00.190540 1010714 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 14:04:00.190545 1010714 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 14:04:00.190551 1010714 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 14:04:00.190557 1010714 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 14:04:00.190563 1010714 command_runner.go:130] > # read_only = false
	I0729 14:04:00.190569 1010714 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 14:04:00.190577 1010714 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 14:04:00.190584 1010714 command_runner.go:130] > # live configuration reload.
	I0729 14:04:00.190588 1010714 command_runner.go:130] > # log_level = "info"
	I0729 14:04:00.190595 1010714 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 14:04:00.190600 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.190606 1010714 command_runner.go:130] > # log_filter = ""
	I0729 14:04:00.190611 1010714 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 14:04:00.190621 1010714 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 14:04:00.190625 1010714 command_runner.go:130] > # separated by comma.
	I0729 14:04:00.190633 1010714 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 14:04:00.190643 1010714 command_runner.go:130] > # uid_mappings = ""
	I0729 14:04:00.190651 1010714 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 14:04:00.190658 1010714 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 14:04:00.190664 1010714 command_runner.go:130] > # separated by comma.
	I0729 14:04:00.190672 1010714 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 14:04:00.190679 1010714 command_runner.go:130] > # gid_mappings = ""
	I0729 14:04:00.190684 1010714 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 14:04:00.190692 1010714 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 14:04:00.190700 1010714 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 14:04:00.190708 1010714 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 14:04:00.190715 1010714 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 14:04:00.190721 1010714 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 14:04:00.190729 1010714 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 14:04:00.190735 1010714 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 14:04:00.190744 1010714 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 14:04:00.190750 1010714 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 14:04:00.190760 1010714 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 14:04:00.190768 1010714 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 14:04:00.190773 1010714 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 14:04:00.190784 1010714 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 14:04:00.190792 1010714 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 14:04:00.190799 1010714 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 14:04:00.190807 1010714 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 14:04:00.190814 1010714 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 14:04:00.190818 1010714 command_runner.go:130] > drop_infra_ctr = false
	I0729 14:04:00.190827 1010714 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 14:04:00.190832 1010714 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 14:04:00.190841 1010714 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 14:04:00.190845 1010714 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 14:04:00.190851 1010714 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 14:04:00.190859 1010714 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 14:04:00.190864 1010714 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 14:04:00.190870 1010714 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 14:04:00.190876 1010714 command_runner.go:130] > # shared_cpuset = ""
	I0729 14:04:00.190881 1010714 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 14:04:00.190886 1010714 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 14:04:00.190891 1010714 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 14:04:00.190898 1010714 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 14:04:00.190902 1010714 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 14:04:00.190907 1010714 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 14:04:00.190914 1010714 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 14:04:00.190918 1010714 command_runner.go:130] > # enable_criu_support = false
	I0729 14:04:00.190925 1010714 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 14:04:00.190931 1010714 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 14:04:00.190937 1010714 command_runner.go:130] > # enable_pod_events = false
	I0729 14:04:00.190942 1010714 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 14:04:00.190954 1010714 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 14:04:00.190960 1010714 command_runner.go:130] > # default_runtime = "runc"
	I0729 14:04:00.190965 1010714 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 14:04:00.190971 1010714 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 14:04:00.190987 1010714 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 14:04:00.190996 1010714 command_runner.go:130] > # creation as a file is not desired either.
	I0729 14:04:00.191008 1010714 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 14:04:00.191022 1010714 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 14:04:00.191033 1010714 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 14:04:00.191037 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.191049 1010714 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 14:04:00.191060 1010714 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 14:04:00.191071 1010714 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 14:04:00.191081 1010714 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 14:04:00.191089 1010714 command_runner.go:130] > #
	I0729 14:04:00.191096 1010714 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 14:04:00.191106 1010714 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 14:04:00.191137 1010714 command_runner.go:130] > # runtime_type = "oci"
	I0729 14:04:00.191144 1010714 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 14:04:00.191149 1010714 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 14:04:00.191154 1010714 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 14:04:00.191158 1010714 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 14:04:00.191167 1010714 command_runner.go:130] > # monitor_env = []
	I0729 14:04:00.191171 1010714 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 14:04:00.191178 1010714 command_runner.go:130] > # allowed_annotations = []
	I0729 14:04:00.191184 1010714 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 14:04:00.191190 1010714 command_runner.go:130] > # Where:
	I0729 14:04:00.191194 1010714 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 14:04:00.191200 1010714 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 14:04:00.191209 1010714 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 14:04:00.191215 1010714 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 14:04:00.191220 1010714 command_runner.go:130] > #   in $PATH.
	I0729 14:04:00.191226 1010714 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 14:04:00.191233 1010714 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 14:04:00.191239 1010714 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 14:04:00.191244 1010714 command_runner.go:130] > #   state.
	I0729 14:04:00.191250 1010714 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 14:04:00.191258 1010714 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 14:04:00.191266 1010714 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 14:04:00.191271 1010714 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 14:04:00.191279 1010714 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 14:04:00.191286 1010714 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 14:04:00.191292 1010714 command_runner.go:130] > #   The currently recognized values are:
	I0729 14:04:00.191298 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 14:04:00.191306 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 14:04:00.191317 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 14:04:00.191325 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 14:04:00.191334 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 14:04:00.191343 1010714 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 14:04:00.191351 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 14:04:00.191360 1010714 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 14:04:00.191368 1010714 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 14:04:00.191374 1010714 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 14:04:00.191381 1010714 command_runner.go:130] > #   deprecated option "conmon".
	I0729 14:04:00.191387 1010714 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 14:04:00.191394 1010714 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 14:04:00.191402 1010714 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 14:04:00.191410 1010714 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 14:04:00.191416 1010714 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 14:04:00.191423 1010714 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 14:04:00.191429 1010714 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 14:04:00.191436 1010714 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 14:04:00.191439 1010714 command_runner.go:130] > #
	I0729 14:04:00.191443 1010714 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 14:04:00.191448 1010714 command_runner.go:130] > #
	I0729 14:04:00.191454 1010714 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 14:04:00.191462 1010714 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 14:04:00.191464 1010714 command_runner.go:130] > #
	I0729 14:04:00.191471 1010714 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 14:04:00.191478 1010714 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 14:04:00.191483 1010714 command_runner.go:130] > #
	I0729 14:04:00.191489 1010714 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 14:04:00.191494 1010714 command_runner.go:130] > # feature.
	I0729 14:04:00.191497 1010714 command_runner.go:130] > #
	I0729 14:04:00.191506 1010714 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 14:04:00.191514 1010714 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 14:04:00.191522 1010714 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 14:04:00.191530 1010714 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 14:04:00.191536 1010714 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 14:04:00.191541 1010714 command_runner.go:130] > #
	I0729 14:04:00.191551 1010714 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 14:04:00.191564 1010714 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 14:04:00.191569 1010714 command_runner.go:130] > #
	I0729 14:04:00.191575 1010714 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 14:04:00.191583 1010714 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 14:04:00.191588 1010714 command_runner.go:130] > #
	I0729 14:04:00.191596 1010714 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 14:04:00.191604 1010714 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 14:04:00.191608 1010714 command_runner.go:130] > # limitation.
	I0729 14:04:00.191612 1010714 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 14:04:00.191619 1010714 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 14:04:00.191623 1010714 command_runner.go:130] > runtime_type = "oci"
	I0729 14:04:00.191629 1010714 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 14:04:00.191632 1010714 command_runner.go:130] > runtime_config_path = ""
	I0729 14:04:00.191637 1010714 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 14:04:00.191643 1010714 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 14:04:00.191647 1010714 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 14:04:00.191653 1010714 command_runner.go:130] > monitor_env = [
	I0729 14:04:00.191658 1010714 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 14:04:00.191664 1010714 command_runner.go:130] > ]
	I0729 14:04:00.191670 1010714 command_runner.go:130] > privileged_without_host_devices = false
	I0729 14:04:00.191678 1010714 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 14:04:00.191685 1010714 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 14:04:00.191691 1010714 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 14:04:00.191700 1010714 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 14:04:00.191710 1010714 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 14:04:00.191718 1010714 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 14:04:00.191726 1010714 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 14:04:00.191735 1010714 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 14:04:00.191742 1010714 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 14:04:00.191749 1010714 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 14:04:00.191759 1010714 command_runner.go:130] > # Example:
	I0729 14:04:00.191763 1010714 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 14:04:00.191767 1010714 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 14:04:00.191772 1010714 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 14:04:00.191776 1010714 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 14:04:00.191780 1010714 command_runner.go:130] > # cpuset = 0
	I0729 14:04:00.191784 1010714 command_runner.go:130] > # cpushares = "0-1"
	I0729 14:04:00.191788 1010714 command_runner.go:130] > # Where:
	I0729 14:04:00.191794 1010714 command_runner.go:130] > # The workload name is workload-type.
	I0729 14:04:00.191801 1010714 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 14:04:00.191805 1010714 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 14:04:00.191810 1010714 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 14:04:00.191817 1010714 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 14:04:00.191823 1010714 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 14:04:00.191827 1010714 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 14:04:00.191833 1010714 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 14:04:00.191837 1010714 command_runner.go:130] > # Default value is set to true
	I0729 14:04:00.191841 1010714 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 14:04:00.191845 1010714 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 14:04:00.191850 1010714 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 14:04:00.191853 1010714 command_runner.go:130] > # Default value is set to 'false'
	I0729 14:04:00.191857 1010714 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 14:04:00.191863 1010714 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 14:04:00.191866 1010714 command_runner.go:130] > #
	I0729 14:04:00.191871 1010714 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 14:04:00.191877 1010714 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 14:04:00.191882 1010714 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 14:04:00.191888 1010714 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 14:04:00.191892 1010714 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 14:04:00.191895 1010714 command_runner.go:130] > [crio.image]
	I0729 14:04:00.191901 1010714 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 14:04:00.191904 1010714 command_runner.go:130] > # default_transport = "docker://"
	I0729 14:04:00.191910 1010714 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 14:04:00.191915 1010714 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 14:04:00.191919 1010714 command_runner.go:130] > # global_auth_file = ""
	I0729 14:04:00.191923 1010714 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 14:04:00.191928 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.191932 1010714 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 14:04:00.191938 1010714 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 14:04:00.191944 1010714 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 14:04:00.191949 1010714 command_runner.go:130] > # This option supports live configuration reload.
	I0729 14:04:00.191955 1010714 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 14:04:00.191961 1010714 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 14:04:00.191969 1010714 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0729 14:04:00.191979 1010714 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0729 14:04:00.191986 1010714 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 14:04:00.191992 1010714 command_runner.go:130] > # pause_command = "/pause"
	I0729 14:04:00.191998 1010714 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 14:04:00.192005 1010714 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 14:04:00.192011 1010714 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 14:04:00.192019 1010714 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 14:04:00.192025 1010714 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 14:04:00.192032 1010714 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 14:04:00.192038 1010714 command_runner.go:130] > # pinned_images = [
	I0729 14:04:00.192041 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.192048 1010714 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 14:04:00.192056 1010714 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 14:04:00.192064 1010714 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 14:04:00.192070 1010714 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 14:04:00.192077 1010714 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 14:04:00.192084 1010714 command_runner.go:130] > # signature_policy = ""
	I0729 14:04:00.192089 1010714 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 14:04:00.192097 1010714 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 14:04:00.192104 1010714 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 14:04:00.192111 1010714 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 14:04:00.192119 1010714 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 14:04:00.192124 1010714 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 14:04:00.192131 1010714 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 14:04:00.192137 1010714 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 14:04:00.192143 1010714 command_runner.go:130] > # changing them here.
	I0729 14:04:00.192147 1010714 command_runner.go:130] > # insecure_registries = [
	I0729 14:04:00.192152 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.192158 1010714 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 14:04:00.192165 1010714 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 14:04:00.192169 1010714 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 14:04:00.192176 1010714 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 14:04:00.192180 1010714 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 14:04:00.192191 1010714 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 14:04:00.192197 1010714 command_runner.go:130] > # CNI plugins.
	I0729 14:04:00.192200 1010714 command_runner.go:130] > [crio.network]
	I0729 14:04:00.192205 1010714 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 14:04:00.192215 1010714 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 14:04:00.192221 1010714 command_runner.go:130] > # cni_default_network = ""
	I0729 14:04:00.192226 1010714 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 14:04:00.192233 1010714 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 14:04:00.192237 1010714 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 14:04:00.192244 1010714 command_runner.go:130] > # plugin_dirs = [
	I0729 14:04:00.192247 1010714 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 14:04:00.192250 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.192256 1010714 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 14:04:00.192260 1010714 command_runner.go:130] > [crio.metrics]
	I0729 14:04:00.192265 1010714 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 14:04:00.192271 1010714 command_runner.go:130] > enable_metrics = true
	I0729 14:04:00.192275 1010714 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 14:04:00.192285 1010714 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 14:04:00.192292 1010714 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 14:04:00.192298 1010714 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 14:04:00.192304 1010714 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 14:04:00.192310 1010714 command_runner.go:130] > # metrics_collectors = [
	I0729 14:04:00.192313 1010714 command_runner.go:130] > # 	"operations",
	I0729 14:04:00.192318 1010714 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 14:04:00.192324 1010714 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 14:04:00.192328 1010714 command_runner.go:130] > # 	"operations_errors",
	I0729 14:04:00.192333 1010714 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 14:04:00.192337 1010714 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 14:04:00.192344 1010714 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 14:04:00.192348 1010714 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 14:04:00.192354 1010714 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 14:04:00.192358 1010714 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 14:04:00.192364 1010714 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 14:04:00.192368 1010714 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 14:04:00.192374 1010714 command_runner.go:130] > # 	"containers_oom_total",
	I0729 14:04:00.192378 1010714 command_runner.go:130] > # 	"containers_oom",
	I0729 14:04:00.192385 1010714 command_runner.go:130] > # 	"processes_defunct",
	I0729 14:04:00.192389 1010714 command_runner.go:130] > # 	"operations_total",
	I0729 14:04:00.192395 1010714 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 14:04:00.192401 1010714 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 14:04:00.192424 1010714 command_runner.go:130] > # 	"operations_errors_total",
	I0729 14:04:00.192435 1010714 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 14:04:00.192442 1010714 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 14:04:00.192451 1010714 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 14:04:00.192456 1010714 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 14:04:00.192462 1010714 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 14:04:00.192466 1010714 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 14:04:00.192473 1010714 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 14:04:00.192477 1010714 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 14:04:00.192486 1010714 command_runner.go:130] > # ]
	I0729 14:04:00.192493 1010714 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 14:04:00.192497 1010714 command_runner.go:130] > # metrics_port = 9090
	I0729 14:04:00.192505 1010714 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 14:04:00.192511 1010714 command_runner.go:130] > # metrics_socket = ""
	I0729 14:04:00.192516 1010714 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 14:04:00.192524 1010714 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 14:04:00.192530 1010714 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 14:04:00.192537 1010714 command_runner.go:130] > # certificate on any modification event.
	I0729 14:04:00.192541 1010714 command_runner.go:130] > # metrics_cert = ""
	I0729 14:04:00.192548 1010714 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 14:04:00.192553 1010714 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 14:04:00.192559 1010714 command_runner.go:130] > # metrics_key = ""
	I0729 14:04:00.192565 1010714 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 14:04:00.192571 1010714 command_runner.go:130] > [crio.tracing]
	I0729 14:04:00.192576 1010714 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 14:04:00.192584 1010714 command_runner.go:130] > # enable_tracing = false
	I0729 14:04:00.192591 1010714 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 14:04:00.192596 1010714 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 14:04:00.192604 1010714 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 14:04:00.192611 1010714 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 14:04:00.192615 1010714 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 14:04:00.192619 1010714 command_runner.go:130] > [crio.nri]
	I0729 14:04:00.192625 1010714 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 14:04:00.192631 1010714 command_runner.go:130] > # enable_nri = false
	I0729 14:04:00.192635 1010714 command_runner.go:130] > # NRI socket to listen on.
	I0729 14:04:00.192642 1010714 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 14:04:00.192646 1010714 command_runner.go:130] > # NRI plugin directory to use.
	I0729 14:04:00.192653 1010714 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 14:04:00.192658 1010714 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 14:04:00.192665 1010714 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 14:04:00.192670 1010714 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 14:04:00.192677 1010714 command_runner.go:130] > # nri_disable_connections = false
	I0729 14:04:00.192682 1010714 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 14:04:00.192689 1010714 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 14:04:00.192694 1010714 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 14:04:00.192700 1010714 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 14:04:00.192706 1010714 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 14:04:00.192711 1010714 command_runner.go:130] > [crio.stats]
	I0729 14:04:00.192717 1010714 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 14:04:00.192724 1010714 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 14:04:00.192729 1010714 command_runner.go:130] > # stats_collection_period = 0
	I0729 14:04:00.192756 1010714 command_runner.go:130] ! time="2024-07-29 14:04:00.143721054Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 14:04:00.192773 1010714 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
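
Note: the [crio.metrics] block in the config dump above has enable_metrics = true, with the default metrics_port of 9090, so CRI-O exposes a Prometheus endpoint on the node. The following is a minimal Go sketch of how such an endpoint could be probed from the host; the address 127.0.0.1:9090 and running the probe locally are assumptions for illustration, not something the test itself does.

// metricscheck.go - hypothetical probe of CRI-O's Prometheus metrics endpoint.
// Assumes enable_metrics = true and the default metrics_port = 9090 shown above.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics") // assumed host/port
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	// Count distinct metric families (e.g. crio_operations_total) in the output.
	seen := map[string]bool{}
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		fields := strings.FieldsFunc(line, func(r rune) bool { return r == '{' || r == ' ' })
		if len(fields) > 0 {
			seen[fields[0]] = true
		}
	}
	fmt.Printf("exported metric families: %d\n", len(seen))
}
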
	I0729 14:04:00.192916 1010714 cni.go:84] Creating CNI manager for ""
	I0729 14:04:00.192930 1010714 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 14:04:00.192942 1010714 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:04:00.192975 1010714 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-999945 NodeName:multinode-999945 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:04:00.193134 1010714 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-999945"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
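
Note: the kubeadm config dumped above wires the pod CIDR (10.244.0.0/16), the service CIDR (10.96.0.0/12) and the CRI-O socket into four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). The following is a minimal Go sketch of the kind of sanity check one could run over such a multi-document file; the file name kubeadm.yaml and the use of gopkg.in/yaml.v3 are assumptions, the field names mirror the dump above.

// kubeadmcheck.go - hypothetical sanity check of a generated kubeadm config.
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed YAML library
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // e.g. the /var/tmp/minikube/kubeadm.yaml.new written below
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// The networking block lives in the ClusterConfiguration document.
		if doc["kind"] == "ClusterConfiguration" {
			if net, ok := doc["networking"].(map[string]interface{}); ok {
				fmt.Println("podSubnet:", net["podSubnet"], "serviceSubnet:", net["serviceSubnet"])
			}
		}
	}
}
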
	
	I0729 14:04:00.193206 1010714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:04:00.203802 1010714 command_runner.go:130] > kubeadm
	I0729 14:04:00.203824 1010714 command_runner.go:130] > kubectl
	I0729 14:04:00.203828 1010714 command_runner.go:130] > kubelet
	I0729 14:04:00.203859 1010714 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:04:00.203909 1010714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:04:00.213929 1010714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0729 14:04:00.230869 1010714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:04:00.247645 1010714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 14:04:00.264195 1010714 ssh_runner.go:195] Run: grep 192.168.39.69	control-plane.minikube.internal$ /etc/hosts
	I0729 14:04:00.268101 1010714 command_runner.go:130] > 192.168.39.69	control-plane.minikube.internal
	I0729 14:04:00.268187 1010714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:04:00.406769 1010714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:04:00.422715 1010714 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945 for IP: 192.168.39.69
	I0729 14:04:00.422741 1010714 certs.go:194] generating shared ca certs ...
	I0729 14:04:00.422758 1010714 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:04:00.422961 1010714 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:04:00.423022 1010714 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:04:00.423037 1010714 certs.go:256] generating profile certs ...
	I0729 14:04:00.423150 1010714 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/client.key
	I0729 14:04:00.423230 1010714 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.key.f8bd8e5a
	I0729 14:04:00.423352 1010714 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.key
	I0729 14:04:00.423374 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 14:04:00.423396 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 14:04:00.423414 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 14:04:00.423430 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 14:04:00.423446 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 14:04:00.423467 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 14:04:00.423488 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 14:04:00.423528 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 14:04:00.423606 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:04:00.423658 1010714 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:04:00.423672 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:04:00.423702 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:04:00.423735 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:04:00.423763 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:04:00.423817 1010714 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:04:00.423889 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem -> /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.423919 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.423939 1010714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.424633 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:04:00.449331 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:04:00.473140 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:04:00.496846 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:04:00.520630 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:04:00.544044 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:04:00.567002 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:04:00.589776 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/multinode-999945/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:04:00.612946 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:04:00.636353 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:04:00.659951 1010714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:04:00.682752 1010714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:04:00.699133 1010714 ssh_runner.go:195] Run: openssl version
	I0729 14:04:00.705027 1010714 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 14:04:00.705193 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:04:00.716114 1010714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.720996 1010714 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.721028 1010714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.721063 1010714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:04:00.726623 1010714 command_runner.go:130] > 51391683
	I0729 14:04:00.726672 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:04:00.736102 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:04:00.747328 1010714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.751769 1010714 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.751803 1010714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.751864 1010714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:04:00.757478 1010714 command_runner.go:130] > 3ec20f2e
	I0729 14:04:00.757547 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:04:00.766878 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:04:00.777798 1010714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.782185 1010714 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.782309 1010714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.782363 1010714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:04:00.788010 1010714 command_runner.go:130] > b5213941
	I0729 14:04:00.788078 1010714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
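
Note: the three blocks above all follow the same pattern: hash each CA bundle with openssl x509 -hash -noout (51391683, 3ec20f2e, b5213941) and install a <hash>.0 symlink under /etc/ssl/certs so OpenSSL-based clients can find it. Below is a minimal local sketch of that step; running it directly on the host with os.Symlink is an assumption, the test performs the equivalent ln -fs through ssh_runner.

// cahash.go - hypothetical local version of the subject-hash symlink step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	// Same command the log shows: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	if err := os.Symlink(pem, link); err != nil {
		return err
	}
	fmt.Println(link, "->", pem)
	return nil
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem", // path taken from the log above
	} {
		if err := installCA(pem); err != nil {
			fmt.Println("skip:", err)
		}
	}
}
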
	I0729 14:04:00.797594 1010714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:04:00.801866 1010714 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:04:00.801886 1010714 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 14:04:00.801891 1010714 command_runner.go:130] > Device: 253,1	Inode: 4197931     Links: 1
	I0729 14:04:00.801901 1010714 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 14:04:00.801910 1010714 command_runner.go:130] > Access: 2024-07-29 13:57:16.043615079 +0000
	I0729 14:04:00.801918 1010714 command_runner.go:130] > Modify: 2024-07-29 13:57:16.043615079 +0000
	I0729 14:04:00.801926 1010714 command_runner.go:130] > Change: 2024-07-29 13:57:16.043615079 +0000
	I0729 14:04:00.801936 1010714 command_runner.go:130] >  Birth: 2024-07-29 13:57:16.043615079 +0000
	I0729 14:04:00.802083 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:04:00.807782 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.807864 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:04:00.813298 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.813504 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:04:00.819275 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.819552 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:04:00.825084 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.825149 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:04:00.830494 1010714 command_runner.go:130] > Certificate will not expire
	I0729 14:04:00.830658 1010714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:04:00.835933 1010714 command_runner.go:130] > Certificate will not expire
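
Note: each "Certificate will not expire" line above is the result of openssl x509 -noout -checkend 86400, i.e. a check that the certificate stays valid for at least the next 24 hours. The same check can be expressed in pure Go with the standard library, as in this sketch (the certificate path is taken from the log and is only an example):

// checkend.go - hypothetical Go equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(86400 * time.Second).Before(cert.NotAfter) {
		fmt.Println("Certificate will not expire") // matches the log output above
	} else {
		fmt.Println("Certificate will expire within 86400s")
	}
}
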
	I0729 14:04:00.836154 1010714 kubeadm.go:392] StartCluster: {Name:multinode-999945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-999945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:04:00.836304 1010714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:04:00.836372 1010714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:04:00.873019 1010714 command_runner.go:130] > 673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03
	I0729 14:04:00.873041 1010714 command_runner.go:130] > bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1
	I0729 14:04:00.873047 1010714 command_runner.go:130] > 59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0
	I0729 14:04:00.873053 1010714 command_runner.go:130] > 5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae
	I0729 14:04:00.873058 1010714 command_runner.go:130] > 0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9
	I0729 14:04:00.873063 1010714 command_runner.go:130] > b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a
	I0729 14:04:00.873068 1010714 command_runner.go:130] > 8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a
	I0729 14:04:00.873074 1010714 command_runner.go:130] > 5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9
	I0729 14:04:00.873092 1010714 cri.go:89] found id: "673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03"
	I0729 14:04:00.873098 1010714 cri.go:89] found id: "bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1"
	I0729 14:04:00.873102 1010714 cri.go:89] found id: "59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0"
	I0729 14:04:00.873105 1010714 cri.go:89] found id: "5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae"
	I0729 14:04:00.873107 1010714 cri.go:89] found id: "0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9"
	I0729 14:04:00.873111 1010714 cri.go:89] found id: "b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a"
	I0729 14:04:00.873114 1010714 cri.go:89] found id: "8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a"
	I0729 14:04:00.873117 1010714 cri.go:89] found id: "5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9"
	I0729 14:04:00.873120 1010714 cri.go:89] found id: ""
	I0729 14:04:00.873159 1010714 ssh_runner.go:195] Run: sudo runc list -f json
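
Note: the "found id:" block above comes from splitting the output of crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system into one container ID per line. A minimal sketch of that parsing step follows; invoking crictl directly instead of through ssh_runner and sudo is an assumption for illustration.

// crilist.go - hypothetical reimplementation of the container-ID listing above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same crictl invocation the log shows being run via `sudo -s eval`.
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
			fmt.Println("found id:", line)
		}
	}
	fmt.Printf("%d kube-system containers\n", len(ids))
}
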
	
	
	==> CRI-O <==
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.699802622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722262086699780142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7793213b-a3a7-40fc-a05a-e08abe1ca104 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.700346351Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7028466c-2adb-49db-97db-fedf5b30b0d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.700509930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7028466c-2adb-49db-97db-fedf5b30b0d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.701223121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d636e1aea8bbaad8f6b3cc0d58a263ad94888138adf99321ac6b61ae0b2881,PodSandboxId:4d524146d248bf0bfbe2c84cc55afe99abd2a759e6645122bd6fb9ab641ca65e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722261880725502114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265,PodSandboxId:bb0504cc5d2292b3efea562425ae6b52b1614d96d3a41fe4dbff6990b04f0687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722261847113704184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c5e0236d7f68d78fd78f55bfe26f82d81dd2e04a72fd0ab7b8cd9c3cd3cddf,PodSandboxId:5da0160d771ed533e7e965c35a37aec87c005658c7b12421fe8c06230d672221,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722261847020717443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c,PodSandboxId:bb933458be8e89a5adcac9e1ce994885825935413140efb05c34cf95f40c03d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722261847053181907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1,PodSandboxId:0daefb8dfb252256fc36ff466e68598feedcce373eb9d111de10e6f1697afc37,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722261847026630865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.ku
bernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d,PodSandboxId:7360f01f059aa74bdd9eaa2945894195a8e7c9fa7b51c9c0efc5380f64dc20f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722261843174915574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69,PodSandboxId:8a2c1b0c32a3c855e932569a73bee843ffc1d5114c413258d98968733b456759,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722261843179692993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4,PodSandboxId:148ec5d4cb38ca009ffad143c2b3f0970640707f0fbbd9be30f4b1328c153e69,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722261843136102046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2,PodSandboxId:f97677f35c33c6ebd44f000123a58de343578e750bf5d7aa7fa05004f970dcc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722261843121767603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b44f5b1f313073a03c94a584539e7b7c50ba3f490962a058b83d8587c99328,PodSandboxId:a61067b3c2d3233d868650b50c4edc1688a48b695f55384c83035ea06ccfda5c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722261526112822801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1,PodSandboxId:e6da8569cf93eee3a36799778c1a3f85174ee8aefbc37b9455ba48123c511ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722261473375862657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03,PodSandboxId:c5ce6ef88b8fd3c6ffd80578fda318e772633c025c1c519c4087fddae258a380,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722261473376844788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0,PodSandboxId:affa2cf556c82f23fdc68e3563ba3be3e9de0da97985d30b6bb896b41d5d1430,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722261461450837773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae,PodSandboxId:2ae0795b9aae115b9ec551ed1403ad19ec5382003137172c54b591b3a3f53466,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722261459327131087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.kubernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a,PodSandboxId:726b9bdc6ce0ee7ddecacc4fbf1f3e5a12190c9e62b1db2b84b633671b0bbd9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722261439873228946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6
e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9,PodSandboxId:d32773a146357a0103a36007d27da5a11fbd8c4253b5a41db29d68f36811e6ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722261439917826327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a,PodSandboxId:13ad0064d67154fbb24ac98097855d62c6e1e1e1fedb7da5635796b97e8cbd2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722261439842926684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9,PodSandboxId:0a6da18c8cf516c6c2140913fcf4a1c304d8215e4c619dc40db84b9f776a4f89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722261439835285795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7028466c-2adb-49db-97db-fedf5b30b0d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.742498532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36b76027-4dc6-441f-9e41-e499faa56cc0 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.742697030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36b76027-4dc6-441f-9e41-e499faa56cc0 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.748358879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=261a2e9d-d3ce-4942-8387-5ffc086be088 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.748796907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722262086748773097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=261a2e9d-d3ce-4942-8387-5ffc086be088 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.749326245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3edff356-e1ee-48a9-aa3f-f424e38fb693 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.749417162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3edff356-e1ee-48a9-aa3f-f424e38fb693 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.750424620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d636e1aea8bbaad8f6b3cc0d58a263ad94888138adf99321ac6b61ae0b2881,PodSandboxId:4d524146d248bf0bfbe2c84cc55afe99abd2a759e6645122bd6fb9ab641ca65e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722261880725502114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265,PodSandboxId:bb0504cc5d2292b3efea562425ae6b52b1614d96d3a41fe4dbff6990b04f0687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722261847113704184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c5e0236d7f68d78fd78f55bfe26f82d81dd2e04a72fd0ab7b8cd9c3cd3cddf,PodSandboxId:5da0160d771ed533e7e965c35a37aec87c005658c7b12421fe8c06230d672221,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722261847020717443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c,PodSandboxId:bb933458be8e89a5adcac9e1ce994885825935413140efb05c34cf95f40c03d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722261847053181907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1,PodSandboxId:0daefb8dfb252256fc36ff466e68598feedcce373eb9d111de10e6f1697afc37,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722261847026630865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.ku
bernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d,PodSandboxId:7360f01f059aa74bdd9eaa2945894195a8e7c9fa7b51c9c0efc5380f64dc20f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722261843174915574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69,PodSandboxId:8a2c1b0c32a3c855e932569a73bee843ffc1d5114c413258d98968733b456759,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722261843179692993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4,PodSandboxId:148ec5d4cb38ca009ffad143c2b3f0970640707f0fbbd9be30f4b1328c153e69,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722261843136102046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2,PodSandboxId:f97677f35c33c6ebd44f000123a58de343578e750bf5d7aa7fa05004f970dcc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722261843121767603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b44f5b1f313073a03c94a584539e7b7c50ba3f490962a058b83d8587c99328,PodSandboxId:a61067b3c2d3233d868650b50c4edc1688a48b695f55384c83035ea06ccfda5c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722261526112822801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1,PodSandboxId:e6da8569cf93eee3a36799778c1a3f85174ee8aefbc37b9455ba48123c511ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722261473375862657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03,PodSandboxId:c5ce6ef88b8fd3c6ffd80578fda318e772633c025c1c519c4087fddae258a380,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722261473376844788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0,PodSandboxId:affa2cf556c82f23fdc68e3563ba3be3e9de0da97985d30b6bb896b41d5d1430,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722261461450837773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae,PodSandboxId:2ae0795b9aae115b9ec551ed1403ad19ec5382003137172c54b591b3a3f53466,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722261459327131087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.kubernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a,PodSandboxId:726b9bdc6ce0ee7ddecacc4fbf1f3e5a12190c9e62b1db2b84b633671b0bbd9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722261439873228946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6
e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9,PodSandboxId:d32773a146357a0103a36007d27da5a11fbd8c4253b5a41db29d68f36811e6ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722261439917826327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a,PodSandboxId:13ad0064d67154fbb24ac98097855d62c6e1e1e1fedb7da5635796b97e8cbd2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722261439842926684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9,PodSandboxId:0a6da18c8cf516c6c2140913fcf4a1c304d8215e4c619dc40db84b9f776a4f89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722261439835285795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3edff356-e1ee-48a9-aa3f-f424e38fb693 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.796716433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01d62e07-aa3a-4a85-a844-acbcc73e931a name=/runtime.v1.RuntimeService/Version
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.796801589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01d62e07-aa3a-4a85-a844-acbcc73e931a name=/runtime.v1.RuntimeService/Version
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.797880978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efda03f6-d9f3-429c-9220-3d30efd24af0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.798593395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722262086798569719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efda03f6-d9f3-429c-9220-3d30efd24af0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.799190003Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=822011d0-5ae3-4f2c-9213-4020663bef1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.799261792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=822011d0-5ae3-4f2c-9213-4020663bef1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.799586527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d636e1aea8bbaad8f6b3cc0d58a263ad94888138adf99321ac6b61ae0b2881,PodSandboxId:4d524146d248bf0bfbe2c84cc55afe99abd2a759e6645122bd6fb9ab641ca65e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722261880725502114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265,PodSandboxId:bb0504cc5d2292b3efea562425ae6b52b1614d96d3a41fe4dbff6990b04f0687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722261847113704184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c5e0236d7f68d78fd78f55bfe26f82d81dd2e04a72fd0ab7b8cd9c3cd3cddf,PodSandboxId:5da0160d771ed533e7e965c35a37aec87c005658c7b12421fe8c06230d672221,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722261847020717443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c,PodSandboxId:bb933458be8e89a5adcac9e1ce994885825935413140efb05c34cf95f40c03d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722261847053181907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1,PodSandboxId:0daefb8dfb252256fc36ff466e68598feedcce373eb9d111de10e6f1697afc37,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722261847026630865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.ku
bernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d,PodSandboxId:7360f01f059aa74bdd9eaa2945894195a8e7c9fa7b51c9c0efc5380f64dc20f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722261843174915574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69,PodSandboxId:8a2c1b0c32a3c855e932569a73bee843ffc1d5114c413258d98968733b456759,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722261843179692993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4,PodSandboxId:148ec5d4cb38ca009ffad143c2b3f0970640707f0fbbd9be30f4b1328c153e69,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722261843136102046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2,PodSandboxId:f97677f35c33c6ebd44f000123a58de343578e750bf5d7aa7fa05004f970dcc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722261843121767603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b44f5b1f313073a03c94a584539e7b7c50ba3f490962a058b83d8587c99328,PodSandboxId:a61067b3c2d3233d868650b50c4edc1688a48b695f55384c83035ea06ccfda5c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722261526112822801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1,PodSandboxId:e6da8569cf93eee3a36799778c1a3f85174ee8aefbc37b9455ba48123c511ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722261473375862657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03,PodSandboxId:c5ce6ef88b8fd3c6ffd80578fda318e772633c025c1c519c4087fddae258a380,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722261473376844788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0,PodSandboxId:affa2cf556c82f23fdc68e3563ba3be3e9de0da97985d30b6bb896b41d5d1430,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722261461450837773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae,PodSandboxId:2ae0795b9aae115b9ec551ed1403ad19ec5382003137172c54b591b3a3f53466,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722261459327131087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.kubernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a,PodSandboxId:726b9bdc6ce0ee7ddecacc4fbf1f3e5a12190c9e62b1db2b84b633671b0bbd9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722261439873228946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6
e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9,PodSandboxId:d32773a146357a0103a36007d27da5a11fbd8c4253b5a41db29d68f36811e6ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722261439917826327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a,PodSandboxId:13ad0064d67154fbb24ac98097855d62c6e1e1e1fedb7da5635796b97e8cbd2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722261439842926684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9,PodSandboxId:0a6da18c8cf516c6c2140913fcf4a1c304d8215e4c619dc40db84b9f776a4f89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722261439835285795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=822011d0-5ae3-4f2c-9213-4020663bef1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.841533571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a75f7c1a-26cf-41c1-a47a-1639cfa0df63 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.841712899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a75f7c1a-26cf-41c1-a47a-1639cfa0df63 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.842689319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a72cd9a9-3965-4edd-87f5-5ec791dc4ab1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.843696271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722262086843615920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a72cd9a9-3965-4edd-87f5-5ec791dc4ab1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.844233446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a20c1155-dcfa-4f2d-827e-2458372704a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.844305320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a20c1155-dcfa-4f2d-827e-2458372704a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:08:06 multinode-999945 crio[2851]: time="2024-07-29 14:08:06.844642871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d636e1aea8bbaad8f6b3cc0d58a263ad94888138adf99321ac6b61ae0b2881,PodSandboxId:4d524146d248bf0bfbe2c84cc55afe99abd2a759e6645122bd6fb9ab641ca65e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722261880725502114,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265,PodSandboxId:bb0504cc5d2292b3efea562425ae6b52b1614d96d3a41fe4dbff6990b04f0687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722261847113704184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c5e0236d7f68d78fd78f55bfe26f82d81dd2e04a72fd0ab7b8cd9c3cd3cddf,PodSandboxId:5da0160d771ed533e7e965c35a37aec87c005658c7b12421fe8c06230d672221,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722261847020717443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c,PodSandboxId:bb933458be8e89a5adcac9e1ce994885825935413140efb05c34cf95f40c03d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722261847053181907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{
\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1,PodSandboxId:0daefb8dfb252256fc36ff466e68598feedcce373eb9d111de10e6f1697afc37,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722261847026630865,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.ku
bernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d,PodSandboxId:7360f01f059aa74bdd9eaa2945894195a8e7c9fa7b51c9c0efc5380f64dc20f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722261843174915574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69,PodSandboxId:8a2c1b0c32a3c855e932569a73bee843ffc1d5114c413258d98968733b456759,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722261843179692993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4,PodSandboxId:148ec5d4cb38ca009ffad143c2b3f0970640707f0fbbd9be30f4b1328c153e69,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722261843136102046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2,PodSandboxId:f97677f35c33c6ebd44f000123a58de343578e750bf5d7aa7fa05004f970dcc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722261843121767603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b44f5b1f313073a03c94a584539e7b7c50ba3f490962a058b83d8587c99328,PodSandboxId:a61067b3c2d3233d868650b50c4edc1688a48b695f55384c83035ea06ccfda5c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722261526112822801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cfbps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43fabb3b-28df-4938-9d21-ab3d93cf1306,},Annotations:map[string]string{io.kubernetes.container.hash: a0cb0440,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1,PodSandboxId:e6da8569cf93eee3a36799778c1a3f85174ee8aefbc37b9455ba48123c511ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722261473375862657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-67wml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6e2780-35e8-4d82-8742-1ad45f71071a,},Annotations:map[string]string{io.kubernetes.container.hash: 32f9e042,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673291a0360a86e9ce27a8fb0c1488f3208c5b1ee6adc3f0333b5dd9e874fa03,PodSandboxId:c5ce6ef88b8fd3c6ffd80578fda318e772633c025c1c519c4087fddae258a380,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722261473376844788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: aeb8f386-5491-4271-8d95-19f1bd0cda53,},Annotations:map[string]string{io.kubernetes.container.hash: 72e1ea8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0,PodSandboxId:affa2cf556c82f23fdc68e3563ba3be3e9de0da97985d30b6bb896b41d5d1430,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722261461450837773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwhbt,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3c736673-1a18-424e-b6f2-564730f5378a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0c71f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae,PodSandboxId:2ae0795b9aae115b9ec551ed1403ad19ec5382003137172c54b591b3a3f53466,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722261459327131087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cs48t,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9b81b754-2cfd-4fc9-ae72-f6c2efdf9796,},Annotations:map[string]string{io.kubernetes.container.hash: a5ba5322,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a,PodSandboxId:726b9bdc6ce0ee7ddecacc4fbf1f3e5a12190c9e62b1db2b84b633671b0bbd9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722261439873228946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e577ab825fe54ad672236fa59e74a6
e3,},Annotations:map[string]string{io.kubernetes.container.hash: 35f7a4ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9,PodSandboxId:d32773a146357a0103a36007d27da5a11fbd8c4253b5a41db29d68f36811e6ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722261439917826327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13418088b51ecca57f25f1e11293367,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a,PodSandboxId:13ad0064d67154fbb24ac98097855d62c6e1e1e1fedb7da5635796b97e8cbd2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722261439842926684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0892d0efca377961238f900e9d91dfde,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9,PodSandboxId:0a6da18c8cf516c6c2140913fcf4a1c304d8215e4c619dc40db84b9f776a4f89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722261439835285795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-999945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aac3f6b20c0cd36fd0badd541f987f,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a20c1155-dcfa-4f2d-827e-2458372704a9 name=/runtime.v1.RuntimeService/ListContainers
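	The repeated ListContainers dumps above are CRI-O answering periodic CRI polling over its gRPC socket; the same container inventory can be pulled directly from the runtime endpoint. Below is a minimal Go sketch of that call, assuming the standard k8s.io/cri-api bindings and CRI-O's default socket path /var/run/crio/crio.sock (both are assumptions for illustration, not values taken from this report), with an empty filter to mirror the "No filters were applied" requests logged above.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed socket path: CRI-O's default; adjust for other runtimes or configs.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// An empty filter returns every container, running and exited alike.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}

		// Print a compact summary similar to the "container status" table further below.
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s  %-17s  attempt=%d  created=%s\n",
				c.Id,
				c.GetMetadata().GetName(),
				c.GetState().String(),
				c.GetMetadata().GetAttempt(),
				time.Unix(0, c.CreatedAt).Format(time.RFC3339))
		}
	}

	Run on the node (or anywhere the socket is mounted), this prints one line per container with its truncated ID, name, state, attempt count, and creation time, matching the fields seen in the ListContainersResponse entries above.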
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f6d636e1aea8b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   4d524146d248b       busybox-fc5497c4f-cfbps
	5e64607e004a7       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      3 minutes ago       Running             kindnet-cni               1                   bb0504cc5d229       kindnet-lwhbt
	dd5909526fbed       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   bb933458be8e8       coredns-7db6d8ff4d-67wml
	6567da46b50a8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      3 minutes ago       Running             kube-proxy                1                   0daefb8dfb252       kube-proxy-cs48t
	46c5e0236d7f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   5da0160d771ed       storage-provisioner
	a390b907d6854       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   8a2c1b0c32a3c       kube-controller-manager-multinode-999945
	b44edb91a2178       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   7360f01f059aa       kube-scheduler-multinode-999945
	05319a5585b93       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   148ec5d4cb38c       etcd-multinode-999945
	9ee15e0cf532d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   f97677f35c33c       kube-apiserver-multinode-999945
	63b44f5b1f313       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   a61067b3c2d32       busybox-fc5497c4f-cfbps
	673291a0360a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   c5ce6ef88b8fd       storage-provisioner
	bac92b71b7328       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   e6da8569cf93e       coredns-7db6d8ff4d-67wml
	59c0df3b19a4f       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   affa2cf556c82       kindnet-lwhbt
	5897cf93acb9c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   2ae0795b9aae1       kube-proxy-cs48t
	0d786c480dd87       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   d32773a146357       kube-scheduler-multinode-999945
	b6f9a6db526f1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   726b9bdc6ce0e       etcd-multinode-999945
	8793a1ffb1b89       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   13ad0064d6715       kube-apiserver-multinode-999945
	5de1847d7078b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   0a6da18c8cf51       kube-controller-manager-multinode-999945
	
	
	==> coredns [bac92b71b7328bee70f714a308bbd8458aaf8df96506296f679ebce41f04aeb1] <==
	[INFO] 10.244.1.2:41439 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776374s
	[INFO] 10.244.1.2:40510 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130581s
	[INFO] 10.244.1.2:47791 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095775s
	[INFO] 10.244.1.2:53005 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001249471s
	[INFO] 10.244.1.2:40177 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106319s
	[INFO] 10.244.1.2:38003 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152364s
	[INFO] 10.244.1.2:34702 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061771s
	[INFO] 10.244.0.3:54854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129225s
	[INFO] 10.244.0.3:57174 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145125s
	[INFO] 10.244.0.3:56250 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050171s
	[INFO] 10.244.0.3:46830 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096578s
	[INFO] 10.244.1.2:33812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135562s
	[INFO] 10.244.1.2:33198 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100874s
	[INFO] 10.244.1.2:43223 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111144s
	[INFO] 10.244.1.2:48727 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065837s
	[INFO] 10.244.0.3:55357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090527s
	[INFO] 10.244.0.3:52260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125398s
	[INFO] 10.244.0.3:59507 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103114s
	[INFO] 10.244.0.3:43384 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007954s
	[INFO] 10.244.1.2:34845 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120083s
	[INFO] 10.244.1.2:41200 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077641s
	[INFO] 10.244.1.2:34736 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215047s
	[INFO] 10.244.1.2:43686 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160032s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dd5909526fbedac88783dcdd44d167327d8b66dd7f71061b6fc65eb4ca85b54c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56654 - 38647 "HINFO IN 2625830199519297080.4413156860955595871. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013297454s
	
	
	==> describe nodes <==
	Name:               multinode-999945
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-999945
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=multinode-999945
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T13_57_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 13:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-999945
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:08:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:04:06 +0000   Mon, 29 Jul 2024 13:57:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:04:06 +0000   Mon, 29 Jul 2024 13:57:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:04:06 +0000   Mon, 29 Jul 2024 13:57:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:04:06 +0000   Mon, 29 Jul 2024 13:57:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-999945
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eeec9bdcc1524e7da0bdaa5dbe13ee4f
	  System UUID:                eeec9bdc-c152-4e7d-a0bd-aa5dbe13ee4f
	  Boot ID:                    0863024b-4695-4eef-a6fe-b126a667817e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cfbps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 coredns-7db6d8ff4d-67wml                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-999945                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-lwhbt                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-999945             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-999945    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-cs48t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-999945             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-999945 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-999945 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-999945 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-999945 event: Registered Node multinode-999945 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-999945 status is now: NodeReady
	  Normal  Starting                 4m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node multinode-999945 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node multinode-999945 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node multinode-999945 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-999945 event: Registered Node multinode-999945 in Controller
	
	
	Name:               multinode-999945-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-999945-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=multinode-999945
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T14_04_46_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:04:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-999945-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:05:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 14:05:15 +0000   Mon, 29 Jul 2024 14:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 14:05:15 +0000   Mon, 29 Jul 2024 14:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 14:05:15 +0000   Mon, 29 Jul 2024 14:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 14:05:15 +0000   Mon, 29 Jul 2024 14:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-999945-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 94292164340a480f8ca8a62dd0a5c6d9
	  System UUID:                94292164-340a-480f-8ca8-a62dd0a5c6d9
	  Boot ID:                    b0b6710f-522e-4441-ab10-1ab5beb4c6cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r6skw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-76rsw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m44s
	  kube-system                 kube-proxy-bdwfd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m17s                  kube-proxy       
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m44s (x2 over 9m44s)  kubelet          Node multinode-999945-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m44s (x2 over 9m44s)  kubelet          Node multinode-999945-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m44s (x2 over 9m44s)  kubelet          Node multinode-999945-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m25s                  kubelet          Node multinode-999945-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m22s)  kubelet          Node multinode-999945-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m22s)  kubelet          Node multinode-999945-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m22s)  kubelet          Node multinode-999945-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-999945-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-999945-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055548] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053697] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.172607] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.129669] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.262305] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.142592] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.372244] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.058325] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.979765] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.102938] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.228904] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.987218] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
	[ +14.287225] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 13:58] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 14:03] systemd-fstab-generator[2769]: Ignoring "noauto" option for root device
	[  +0.135871] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.161470] systemd-fstab-generator[2795]: Ignoring "noauto" option for root device
	[  +0.149360] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.278023] systemd-fstab-generator[2835]: Ignoring "noauto" option for root device
	[  +0.709644] systemd-fstab-generator[2934]: Ignoring "noauto" option for root device
	[Jul29 14:04] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +4.681650] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.841461] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.631841] systemd-fstab-generator[3891]: Ignoring "noauto" option for root device
	[ +19.281162] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [05319a5585b937d3170f647cde7f87563c590fd62da79c9dc277b6c1fb0e45a4] <==
	{"level":"info","ts":"2024-07-29T14:04:03.629407Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:04:03.629458Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:04:03.629467Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:04:03.629702Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T14:04:03.629729Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T14:04:03.632858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b switched to configuration voters=(10491453631398908315)"}
	{"level":"info","ts":"2024-07-29T14:04:03.63294Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","added-peer-id":"9199217ddd03919b","added-peer-peer-urls":["https://192.168.39.69:2380"]}
	{"level":"info","ts":"2024-07-29T14:04:03.63574Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:04:03.635787Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:04:04.941099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T14:04:04.941224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:04:04.941274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgPreVoteResp from 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2024-07-29T14:04:04.94131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T14:04:04.941334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgVoteResp from 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-07-29T14:04:04.941361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became leader at term 3"}
	{"level":"info","ts":"2024-07-29T14:04:04.94139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9199217ddd03919b elected leader 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-07-29T14:04:04.952593Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9199217ddd03919b","local-member-attributes":"{Name:multinode-999945 ClientURLs:[https://192.168.39.69:2379]}","request-path":"/0/members/9199217ddd03919b/attributes","cluster-id":"6c21f62219c1156b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:04:04.953086Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:04:04.953186Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:04:04.953228Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:04:04.953261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:04:04.960506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T14:04:04.970849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.69:2379"}
	{"level":"info","ts":"2024-07-29T14:04:49.465115Z","caller":"traceutil/trace.go:171","msg":"trace[1930564265] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"188.512868ms","start":"2024-07-29T14:04:49.276575Z","end":"2024-07-29T14:04:49.465088Z","steps":["trace[1930564265] 'process raft request'  (duration: 188.352905ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T14:05:27.255884Z","caller":"traceutil/trace.go:171","msg":"trace[2131238968] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"215.509076ms","start":"2024-07-29T14:05:27.04033Z","end":"2024-07-29T14:05:27.255839Z","steps":["trace[2131238968] 'process raft request'  (duration: 215.306774ms)"],"step_count":1}
	
	
	==> etcd [b6f9a6db526f19329a886495225c588e7652d4037d74af31bb25dc7e71df226a] <==
	{"level":"info","ts":"2024-07-29T13:57:20.530331Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T13:57:20.530733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T13:57:20.530762Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T13:58:23.741798Z","caller":"traceutil/trace.go:171","msg":"trace[1892503512] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"233.248257ms","start":"2024-07-29T13:58:23.50851Z","end":"2024-07-29T13:58:23.741758Z","steps":["trace[1892503512] 'process raft request'  (duration: 227.598525ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:58:23.742079Z","caller":"traceutil/trace.go:171","msg":"trace[294383329] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:462; }","duration":"224.416622ms","start":"2024-07-29T13:58:23.517642Z","end":"2024-07-29T13:58:23.742058Z","steps":["trace[294383329] 'read index received'  (duration: 218.467608ms)","trace[294383329] 'applied index is now lower than readState.Index'  (duration: 5.947668ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T13:58:23.74226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.57129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-999945-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T13:58:23.742628Z","caller":"traceutil/trace.go:171","msg":"trace[275719880] range","detail":"{range_begin:/registry/minions/multinode-999945-m02; range_end:; response_count:1; response_revision:443; }","duration":"225.023717ms","start":"2024-07-29T13:58:23.517597Z","end":"2024-07-29T13:58:23.74262Z","steps":["trace[275719880] 'agreement among raft nodes before linearized reading'  (duration: 224.540679ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:58:23.742395Z","caller":"traceutil/trace.go:171","msg":"trace[92227388] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"169.614152ms","start":"2024-07-29T13:58:23.572774Z","end":"2024-07-29T13:58:23.742388Z","steps":["trace[92227388] 'process raft request'  (duration: 168.73098ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:58:28.76072Z","caller":"traceutil/trace.go:171","msg":"trace[486527720] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"180.021158ms","start":"2024-07-29T13:58:28.58068Z","end":"2024-07-29T13:58:28.760701Z","steps":["trace[486527720] 'process raft request'  (duration: 179.926899ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:58:28.781381Z","caller":"traceutil/trace.go:171","msg":"trace[1347007931] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"193.959422ms","start":"2024-07-29T13:58:28.587408Z","end":"2024-07-29T13:58:28.781367Z","steps":["trace[1347007931] 'process raft request'  (duration: 193.748753ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:59:17.209697Z","caller":"traceutil/trace.go:171","msg":"trace[556575040] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:610; }","duration":"114.926355ms","start":"2024-07-29T13:59:17.094735Z","end":"2024-07-29T13:59:17.209661Z","steps":["trace[556575040] 'read index received'  (duration: 84.458398ms)","trace[556575040] 'applied index is now lower than readState.Index'  (duration: 30.467511ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T13:59:17.209867Z","caller":"traceutil/trace.go:171","msg":"trace[1374024925] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"164.190521ms","start":"2024-07-29T13:59:17.045665Z","end":"2024-07-29T13:59:17.209855Z","steps":["trace[1374024925] 'process raft request'  (duration: 163.963259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T13:59:17.210094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.319587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-999945-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T13:59:17.210148Z","caller":"traceutil/trace.go:171","msg":"trace[1725965248] range","detail":"{range_begin:/registry/minions/multinode-999945-m03; range_end:; response_count:1; response_revision:577; }","duration":"115.424833ms","start":"2024-07-29T13:59:17.094711Z","end":"2024-07-29T13:59:17.210136Z","steps":["trace[1725965248] 'agreement among raft nodes before linearized reading'  (duration: 115.245334ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T13:59:17.210421Z","caller":"traceutil/trace.go:171","msg":"trace[1320330072] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"248.835244ms","start":"2024-07-29T13:59:16.961575Z","end":"2024-07-29T13:59:17.21041Z","steps":["trace[1320330072] 'process raft request'  (duration: 217.609584ms)","trace[1320330072] 'compare'  (duration: 30.303689ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T14:02:27.505937Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T14:02:27.506112Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-999945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	{"level":"warn","ts":"2024-07-29T14:02:27.50627Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T14:02:27.50639Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T14:02:27.547865Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T14:02:27.547947Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T14:02:27.549426Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9199217ddd03919b","current-leader-member-id":"9199217ddd03919b"}
	{"level":"info","ts":"2024-07-29T14:02:27.552402Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T14:02:27.552575Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T14:02:27.552605Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-999945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	
	
	==> kernel <==
	 14:08:07 up 11 min,  0 users,  load average: 0.12, 0.25, 0.14
	Linux multinode-999945 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [59c0df3b19a4fbb788485f9a7433f0a56f9e3ef8785321f92b6742b431c4f1f0] <==
	I0729 14:01:42.567392       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:01:52.571686       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:01:52.571744       1 main.go:299] handling current node
	I0729 14:01:52.571760       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:01:52.571765       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:01:52.571901       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:01:52.571925       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:02:02.575349       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:02:02.575401       1 main.go:299] handling current node
	I0729 14:02:02.575420       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:02:02.575426       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:02:02.575604       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:02:02.575627       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:02:12.566838       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:02:12.566917       1 main.go:299] handling current node
	I0729 14:02:12.566935       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:02:12.566941       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:02:12.567199       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:02:12.567234       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:02:22.572181       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0729 14:02:22.572274       1 main.go:322] Node multinode-999945-m03 has CIDR [10.244.3.0/24] 
	I0729 14:02:22.572436       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:02:22.572463       1 main.go:299] handling current node
	I0729 14:02:22.572486       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:02:22.572501       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5e64607e004a7e42791b0b84ea6755935eab67c0d5a3a35786d38182fb119265] <==
	I0729 14:06:58.074651       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:07:08.069811       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:07:08.069866       1 main.go:299] handling current node
	I0729 14:07:08.069892       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:07:08.069901       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:07:18.078481       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:07:18.078586       1 main.go:299] handling current node
	I0729 14:07:18.078616       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:07:18.078635       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:07:28.076458       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:07:28.076483       1 main.go:299] handling current node
	I0729 14:07:28.076496       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:07:28.076502       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:07:38.070081       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:07:38.070132       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:07:38.070287       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:07:38.070315       1 main.go:299] handling current node
	I0729 14:07:48.080283       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:07:48.080834       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	I0729 14:07:48.081253       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:07:48.081268       1 main.go:299] handling current node
	I0729 14:07:58.072570       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 14:07:58.072662       1 main.go:299] handling current node
	I0729 14:07:58.072693       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0729 14:07:58.072742       1 main.go:322] Node multinode-999945-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8793a1ffb1b8951601b8a86ceecf1ba3258808adbd0292621591d7d6df9cda3a] <==
	W0729 14:02:27.528624       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528705       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528755       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528784       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528811       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528837       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528864       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528891       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528920       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528946       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.528975       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529064       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529104       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529133       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529161       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529565       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529596       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529622       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529831       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.529861       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530308       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530346       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530443       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530544       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:02:27.530625       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9ee15e0cf532db32c9d562fa7725b8f8d2d2952cf04a265ec07f301714d817e2] <==
	I0729 14:04:06.429816       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 14:04:06.429936       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 14:04:06.491315       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 14:04:06.498105       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 14:04:06.498145       1 policy_source.go:224] refreshing policies
	I0729 14:04:06.527376       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 14:04:06.528618       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 14:04:06.528706       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 14:04:06.529151       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 14:04:06.529451       1 aggregator.go:165] initial CRD sync complete...
	I0729 14:04:06.529493       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 14:04:06.529516       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 14:04:06.529538       1 cache.go:39] Caches are synced for autoregister controller
	I0729 14:04:06.530077       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 14:04:06.544880       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 14:04:06.549625       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 14:04:06.568591       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 14:04:07.352456       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 14:04:08.354738       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 14:04:08.497727       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 14:04:08.521654       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 14:04:08.599682       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 14:04:08.607305       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 14:04:18.708581       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 14:04:18.733665       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5de1847d7078b112d019dd0cb882d00e39d73aced580db855f936d8c1bf9eba9] <==
	I0729 13:58:23.792324       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m02" podCIDRs=["10.244.1.0/24"]
	I0729 13:58:28.578692       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-999945-m02"
	I0729 13:58:42.441382       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 13:58:44.824913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.60669ms"
	I0729 13:58:44.835254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.095696ms"
	I0729 13:58:44.835714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="130.439µs"
	I0729 13:58:44.839676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.436µs"
	I0729 13:58:44.844458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.491µs"
	I0729 13:58:46.313413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.335342ms"
	I0729 13:58:46.313551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.378µs"
	I0729 13:58:46.588359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.958301ms"
	I0729 13:58:46.588594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.041µs"
	I0729 13:59:17.213957       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 13:59:17.214748       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-999945-m03\" does not exist"
	I0729 13:59:17.252884       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m03" podCIDRs=["10.244.2.0/24"]
	I0729 13:59:18.599954       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-999945-m03"
	I0729 13:59:35.396598       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:00:02.936085       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:00:03.851204       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:00:03.851310       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-999945-m03\" does not exist"
	I0729 14:00:03.858071       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m03" podCIDRs=["10.244.3.0/24"]
	I0729 14:00:21.767330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:01:08.667889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:01:08.729656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.479687ms"
	I0729 14:01:08.730340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.738µs"
	
	
	==> kube-controller-manager [a390b907d6854d6c4e0061661e8fb937a4f0b1fb1f717c74fd4e32eddcca8a69] <==
	I0729 14:04:45.238383       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m02" podCIDRs=["10.244.1.0/24"]
	I0729 14:04:47.129624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.91µs"
	I0729 14:04:47.171330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.089µs"
	I0729 14:04:47.178576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.334µs"
	I0729 14:04:47.206568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.998µs"
	I0729 14:04:47.215974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.785µs"
	I0729 14:04:47.218924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.218µs"
	I0729 14:04:49.469121       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="206.68µs"
	I0729 14:05:03.248454       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:05:03.270985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.023µs"
	I0729 14:05:03.282663       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.297µs"
	I0729 14:05:05.783228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.718176ms"
	I0729 14:05:05.783371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.534µs"
	I0729 14:05:21.708369       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:05:23.069578       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-999945-m03\" does not exist"
	I0729 14:05:23.069807       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:05:23.080468       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-999945-m03" podCIDRs=["10.244.2.0/24"]
	I0729 14:05:40.475949       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:05:45.939977       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-999945-m02"
	I0729 14:06:28.928649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.43833ms"
	I0729 14:06:28.930139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.751µs"
	I0729 14:06:38.757424       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dpx6f"
	I0729 14:06:38.789408       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dpx6f"
	I0729 14:06:38.789486       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wc8pr"
	I0729 14:06:38.810468       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wc8pr"
	
	
	==> kube-proxy [5897cf93acb9cea2fda1839684c4f12edfd20f2c53e557db0f2be0857f2b51ae] <==
	I0729 13:57:39.827518       1 server_linux.go:69] "Using iptables proxy"
	I0729 13:57:39.841951       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	I0729 13:57:39.888927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 13:57:39.888964       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 13:57:39.888979       1 server_linux.go:165] "Using iptables Proxier"
	I0729 13:57:39.892399       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 13:57:39.892638       1 server.go:872] "Version info" version="v1.30.3"
	I0729 13:57:39.892681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 13:57:39.894982       1 config.go:192] "Starting service config controller"
	I0729 13:57:39.895362       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 13:57:39.895427       1 config.go:101] "Starting endpoint slice config controller"
	I0729 13:57:39.895449       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 13:57:39.896675       1 config.go:319] "Starting node config controller"
	I0729 13:57:39.896712       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 13:57:39.996051       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 13:57:39.996105       1 shared_informer.go:320] Caches are synced for service config
	I0729 13:57:39.996800       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [6567da46b50a844e0cdd358420b86bc9b98dd43dbd8a4a5ca1e466561e3d99f1] <==
	I0729 14:04:07.267715       1 server_linux.go:69] "Using iptables proxy"
	I0729 14:04:07.287759       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	I0729 14:04:07.352694       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 14:04:07.352755       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:04:07.352773       1 server_linux.go:165] "Using iptables Proxier"
	I0729 14:04:07.359409       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 14:04:07.359609       1 server.go:872] "Version info" version="v1.30.3"
	I0729 14:04:07.359638       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:04:07.361185       1 config.go:192] "Starting service config controller"
	I0729 14:04:07.361227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:04:07.361254       1 config.go:101] "Starting endpoint slice config controller"
	I0729 14:04:07.361258       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:04:07.361748       1 config.go:319] "Starting node config controller"
	I0729 14:04:07.364481       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:04:07.462211       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 14:04:07.462279       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:04:07.464716       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0d786c480dd875d1ece431db9ee5238921fe265e114808a871b24281d37e07f9] <==
	E0729 13:57:22.408916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:57:22.407956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:22.408156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:22.408163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:57:23.231590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 13:57:23.231638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 13:57:23.311669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 13:57:23.311792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 13:57:23.390328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 13:57:23.390494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 13:57:23.491767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:23.491826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 13:57:23.506218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 13:57:23.506339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 13:57:23.511151       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:23.511332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 13:57:23.637810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 13:57:23.638963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 13:57:23.638775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 13:57:23.639228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 13:57:23.951635       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 13:57:23.951754       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 13:57:27.101693       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:02:27.501839       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0729 14:02:27.502563       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b44edb91a2178ee2d2639cd7cd3fb28761d56b2ea41dcca8f2a0a27e78cd8b9d] <==
	I0729 14:04:04.662664       1 serving.go:380] Generated self-signed cert in-memory
	W0729 14:04:06.385699       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 14:04:06.385801       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:04:06.385826       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 14:04:06.385832       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 14:04:06.444850       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 14:04:06.445289       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:04:06.451811       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 14:04:06.452061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:04:06.453732       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 14:04:06.453800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 14:04:06.553148       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.541853    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b81b754-2cfd-4fc9-ae72-f6c2efdf9796-xtables-lock\") pod \"kube-proxy-cs48t\" (UID: \"9b81b754-2cfd-4fc9-ae72-f6c2efdf9796\") " pod="kube-system/kube-proxy-cs48t"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.541910    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c736673-1a18-424e-b6f2-564730f5378a-xtables-lock\") pod \"kindnet-lwhbt\" (UID: \"3c736673-1a18-424e-b6f2-564730f5378a\") " pod="kube-system/kindnet-lwhbt"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.541967    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c736673-1a18-424e-b6f2-564730f5378a-lib-modules\") pod \"kindnet-lwhbt\" (UID: \"3c736673-1a18-424e-b6f2-564730f5378a\") " pod="kube-system/kindnet-lwhbt"
	Jul 29 14:04:06 multinode-999945 kubelet[3066]: I0729 14:04:06.542063    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aeb8f386-5491-4271-8d95-19f1bd0cda53-tmp\") pod \"storage-provisioner\" (UID: \"aeb8f386-5491-4271-8d95-19f1bd0cda53\") " pod="kube-system/storage-provisioner"
	Jul 29 14:04:09 multinode-999945 kubelet[3066]: I0729 14:04:09.914898    3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 14:05:02 multinode-999945 kubelet[3066]: E0729 14:05:02.524381    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:05:02 multinode-999945 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:05:02 multinode-999945 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:05:02 multinode-999945 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:05:02 multinode-999945 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:06:02 multinode-999945 kubelet[3066]: E0729 14:06:02.523972    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:06:02 multinode-999945 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:06:02 multinode-999945 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:06:02 multinode-999945 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:06:02 multinode-999945 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:07:02 multinode-999945 kubelet[3066]: E0729 14:07:02.524292    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:07:02 multinode-999945 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:07:02 multinode-999945 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:07:02 multinode-999945 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:07:02 multinode-999945 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:08:02 multinode-999945 kubelet[3066]: E0729 14:08:02.527244    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:08:02 multinode-999945 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:08:02 multinode-999945 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:08:02 multinode-999945 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:08:02 multinode-999945 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:08:06.429946 1012574 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19338-974764/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-999945 -n multinode-999945
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-999945 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.29s)

                                                
                                    
x
+
TestPreload (277.42s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-690943 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 14:12:06.665711  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 14:14:13.712360  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-690943 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m16.534844002s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-690943 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-690943 image pull gcr.io/k8s-minikube/busybox: (1.074540851s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-690943
E0729 14:14:30.665920  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-690943: exit status 82 (2m0.458096135s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-690943"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-690943 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-29 14:16:15.431556116 +0000 UTC m=+3861.839279886
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-690943 -n test-preload-690943
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-690943 -n test-preload-690943: exit status 3 (18.561605342s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:16:33.988791 1015375 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E0729 14:16:33.988815 1015375 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-690943" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-690943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-690943
--- FAIL: TestPreload (277.42s)

                                                
                                    
x
+
TestKubernetesUpgrade (464.32s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m34.025066208s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-050658] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-050658" primary control-plane node in "kubernetes-upgrade-050658" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 14:18:33.419121 1019233 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:18:33.419262 1019233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:18:33.419273 1019233 out.go:304] Setting ErrFile to fd 2...
	I0729 14:18:33.419280 1019233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:18:33.419612 1019233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:18:33.420455 1019233 out.go:298] Setting JSON to false
	I0729 14:18:33.421873 1019233 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14465,"bootTime":1722248248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:18:33.421959 1019233 start.go:139] virtualization: kvm guest
	I0729 14:18:33.424488 1019233 out.go:177] * [kubernetes-upgrade-050658] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:18:33.426232 1019233 notify.go:220] Checking for updates...
	I0729 14:18:33.426303 1019233 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:18:33.427755 1019233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:18:33.429258 1019233 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:18:33.430652 1019233 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:18:33.432245 1019233 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:18:33.433661 1019233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:18:33.435769 1019233 config.go:182] Loaded profile config "NoKubernetes-721916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:18:33.435924 1019233 config.go:182] Loaded profile config "force-systemd-env-764732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:18:33.436063 1019233 config.go:182] Loaded profile config "offline-crio-715623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:18:33.436177 1019233 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:18:33.469492 1019233 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 14:18:33.470919 1019233 start.go:297] selected driver: kvm2
	I0729 14:18:33.470942 1019233 start.go:901] validating driver "kvm2" against <nil>
	I0729 14:18:33.470957 1019233 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:18:33.472030 1019233 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:18:33.472131 1019233 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:18:33.487633 1019233 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:18:33.487689 1019233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 14:18:33.487983 1019233 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 14:18:33.488051 1019233 cni.go:84] Creating CNI manager for ""
	I0729 14:18:33.488069 1019233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:18:33.488090 1019233 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 14:18:33.488164 1019233 start.go:340] cluster config:
	{Name:kubernetes-upgrade-050658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-050658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:18:33.488310 1019233 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:18:33.491021 1019233 out.go:177] * Starting "kubernetes-upgrade-050658" primary control-plane node in "kubernetes-upgrade-050658" cluster
	I0729 14:18:33.492464 1019233 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:18:33.492512 1019233 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:18:33.492525 1019233 cache.go:56] Caching tarball of preloaded images
	I0729 14:18:33.492631 1019233 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:18:33.492646 1019233 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:18:33.492790 1019233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/config.json ...
	I0729 14:18:33.492829 1019233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/config.json: {Name:mkb91829b4f80a9adcba38b2172450d7bc077df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:18:33.493009 1019233 start.go:360] acquireMachinesLock for kubernetes-upgrade-050658: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:19:36.309219 1019233 start.go:364] duration metric: took 1m2.816161468s to acquireMachinesLock for "kubernetes-upgrade-050658"
	I0729 14:19:36.309308 1019233 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-050658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-050658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:19:36.309411 1019233 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 14:19:36.311566 1019233 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 14:19:36.311769 1019233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:19:36.311817 1019233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:19:36.328355 1019233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0729 14:19:36.328858 1019233 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:19:36.329480 1019233 main.go:141] libmachine: Using API Version  1
	I0729 14:19:36.329500 1019233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:19:36.329912 1019233 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:19:36.330125 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetMachineName
	I0729 14:19:36.330328 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:19:36.330506 1019233 start.go:159] libmachine.API.Create for "kubernetes-upgrade-050658" (driver="kvm2")
	I0729 14:19:36.330536 1019233 client.go:168] LocalClient.Create starting
	I0729 14:19:36.330567 1019233 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 14:19:36.330604 1019233 main.go:141] libmachine: Decoding PEM data...
	I0729 14:19:36.330631 1019233 main.go:141] libmachine: Parsing certificate...
	I0729 14:19:36.330703 1019233 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 14:19:36.330727 1019233 main.go:141] libmachine: Decoding PEM data...
	I0729 14:19:36.330749 1019233 main.go:141] libmachine: Parsing certificate...
	I0729 14:19:36.330773 1019233 main.go:141] libmachine: Running pre-create checks...
	I0729 14:19:36.330785 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .PreCreateCheck
	I0729 14:19:36.331172 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetConfigRaw
	I0729 14:19:36.331644 1019233 main.go:141] libmachine: Creating machine...
	I0729 14:19:36.331663 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .Create
	I0729 14:19:36.331839 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Creating KVM machine...
	I0729 14:19:36.333011 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found existing default KVM network
	I0729 14:19:36.334207 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:36.334049 1019939 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ec0}
	I0729 14:19:36.334239 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | created network xml: 
	I0729 14:19:36.334252 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | <network>
	I0729 14:19:36.334261 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |   <name>mk-kubernetes-upgrade-050658</name>
	I0729 14:19:36.334275 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |   <dns enable='no'/>
	I0729 14:19:36.334285 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |   
	I0729 14:19:36.334295 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 14:19:36.334303 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |     <dhcp>
	I0729 14:19:36.334333 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 14:19:36.334353 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |     </dhcp>
	I0729 14:19:36.334364 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |   </ip>
	I0729 14:19:36.334372 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG |   
	I0729 14:19:36.334388 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | </network>
	I0729 14:19:36.334394 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | 
	I0729 14:19:36.339716 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | trying to create private KVM network mk-kubernetes-upgrade-050658 192.168.39.0/24...
	I0729 14:19:36.416547 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | private KVM network mk-kubernetes-upgrade-050658 192.168.39.0/24 created
	I0729 14:19:36.416587 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:36.416490 1019939 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:19:36.416604 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658 ...
	I0729 14:19:36.416624 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 14:19:36.416726 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 14:19:36.676752 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:36.676606 1019939 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa...
	I0729 14:19:36.849661 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:36.849470 1019939 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/kubernetes-upgrade-050658.rawdisk...
	I0729 14:19:36.849697 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Writing magic tar header
	I0729 14:19:36.849716 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Writing SSH key tar header
	I0729 14:19:36.849729 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:36.849606 1019939 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658 ...
	I0729 14:19:36.849744 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658 (perms=drwx------)
	I0729 14:19:36.849759 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658
	I0729 14:19:36.849770 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 14:19:36.849785 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 14:19:36.849798 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 14:19:36.849816 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 14:19:36.849838 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 14:19:36.849862 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 14:19:36.849874 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Creating domain...
	I0729 14:19:36.849893 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:19:36.849904 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 14:19:36.849948 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 14:19:36.849989 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Checking permissions on dir: /home/jenkins
	I0729 14:19:36.850005 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Checking permissions on dir: /home
	I0729 14:19:36.850027 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Skipping /home - not owner
	I0729 14:19:36.851307 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) define libvirt domain using xml: 
	I0729 14:19:36.851337 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) <domain type='kvm'>
	I0729 14:19:36.851349 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   <name>kubernetes-upgrade-050658</name>
	I0729 14:19:36.851357 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   <memory unit='MiB'>2200</memory>
	I0729 14:19:36.851365 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   <vcpu>2</vcpu>
	I0729 14:19:36.851371 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   <features>
	I0729 14:19:36.851377 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <acpi/>
	I0729 14:19:36.851384 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <apic/>
	I0729 14:19:36.851390 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <pae/>
	I0729 14:19:36.851402 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     
	I0729 14:19:36.851411 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   </features>
	I0729 14:19:36.851418 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   <cpu mode='host-passthrough'>
	I0729 14:19:36.851450 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   
	I0729 14:19:36.851475 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   </cpu>
	I0729 14:19:36.851488 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   <os>
	I0729 14:19:36.851499 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <type>hvm</type>
	I0729 14:19:36.851517 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <boot dev='cdrom'/>
	I0729 14:19:36.851528 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <boot dev='hd'/>
	I0729 14:19:36.851541 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <bootmenu enable='no'/>
	I0729 14:19:36.851554 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   </os>
	I0729 14:19:36.851567 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   <devices>
	I0729 14:19:36.851580 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <disk type='file' device='cdrom'>
	I0729 14:19:36.851599 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/boot2docker.iso'/>
	I0729 14:19:36.851610 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <target dev='hdc' bus='scsi'/>
	I0729 14:19:36.851623 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <readonly/>
	I0729 14:19:36.851638 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     </disk>
	I0729 14:19:36.851652 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <disk type='file' device='disk'>
	I0729 14:19:36.851665 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 14:19:36.851695 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/kubernetes-upgrade-050658.rawdisk'/>
	I0729 14:19:36.851715 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <target dev='hda' bus='virtio'/>
	I0729 14:19:36.851727 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     </disk>
	I0729 14:19:36.851739 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <interface type='network'>
	I0729 14:19:36.851752 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <source network='mk-kubernetes-upgrade-050658'/>
	I0729 14:19:36.851767 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <model type='virtio'/>
	I0729 14:19:36.851779 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     </interface>
	I0729 14:19:36.851796 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <interface type='network'>
	I0729 14:19:36.851810 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <source network='default'/>
	I0729 14:19:36.851821 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <model type='virtio'/>
	I0729 14:19:36.851833 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     </interface>
	I0729 14:19:36.851844 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <serial type='pty'>
	I0729 14:19:36.851855 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <target port='0'/>
	I0729 14:19:36.851872 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     </serial>
	I0729 14:19:36.851884 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <console type='pty'>
	I0729 14:19:36.851893 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <target type='serial' port='0'/>
	I0729 14:19:36.851903 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     </console>
	I0729 14:19:36.851914 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     <rng model='virtio'>
	I0729 14:19:36.851927 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)       <backend model='random'>/dev/random</backend>
	I0729 14:19:36.851937 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     </rng>
	I0729 14:19:36.851948 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     
	I0729 14:19:36.851959 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)     
	I0729 14:19:36.851967 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658)   </devices>
	I0729 14:19:36.851977 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) </domain>
	I0729 14:19:36.851988 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) 
	I0729 14:19:36.857160 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:65:83:12 in network default
	I0729 14:19:36.858122 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Ensuring networks are active...
	I0729 14:19:36.858162 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:36.858844 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Ensuring network default is active
	I0729 14:19:36.859247 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Ensuring network mk-kubernetes-upgrade-050658 is active
	I0729 14:19:36.859972 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Getting domain xml...
	I0729 14:19:36.861122 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Creating domain...
	I0729 14:19:37.194042 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Waiting to get IP...
	I0729 14:19:37.194795 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:37.195208 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:37.195245 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:37.195198 1019939 retry.go:31] will retry after 249.088265ms: waiting for machine to come up
	I0729 14:19:37.445889 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:37.446433 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:37.446460 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:37.446382 1019939 retry.go:31] will retry after 324.488153ms: waiting for machine to come up
	I0729 14:19:37.772883 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:37.773430 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:37.773460 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:37.773370 1019939 retry.go:31] will retry after 372.704425ms: waiting for machine to come up
	I0729 14:19:38.147968 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:38.148694 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:38.148724 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:38.148647 1019939 retry.go:31] will retry after 418.793195ms: waiting for machine to come up
	I0729 14:19:38.569328 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:38.569864 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:38.569895 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:38.569805 1019939 retry.go:31] will retry after 752.930919ms: waiting for machine to come up
	I0729 14:19:39.324806 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:39.325309 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:39.325339 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:39.325260 1019939 retry.go:31] will retry after 951.676131ms: waiting for machine to come up
	I0729 14:19:40.278408 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:40.278940 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:40.278970 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:40.278879 1019939 retry.go:31] will retry after 1.113419002s: waiting for machine to come up
	I0729 14:19:41.393772 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:41.394312 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:41.394339 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:41.394261 1019939 retry.go:31] will retry after 958.137783ms: waiting for machine to come up
	I0729 14:19:42.353607 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:42.354034 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:42.354063 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:42.353988 1019939 retry.go:31] will retry after 1.777545826s: waiting for machine to come up
	I0729 14:19:44.133649 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:44.134204 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:44.134236 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:44.134138 1019939 retry.go:31] will retry after 2.006119711s: waiting for machine to come up
	I0729 14:19:46.141824 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:46.142330 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:46.142356 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:46.142294 1019939 retry.go:31] will retry after 2.421693496s: waiting for machine to come up
	I0729 14:19:48.566509 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:48.567014 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:48.567083 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:48.566967 1019939 retry.go:31] will retry after 3.631426542s: waiting for machine to come up
	I0729 14:19:52.199861 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:52.200428 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:52.200462 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:52.200363 1019939 retry.go:31] will retry after 3.627617798s: waiting for machine to come up
	I0729 14:19:55.829451 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:19:55.829871 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find current IP address of domain kubernetes-upgrade-050658 in network mk-kubernetes-upgrade-050658
	I0729 14:19:55.829904 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | I0729 14:19:55.829826 1019939 retry.go:31] will retry after 5.388975001s: waiting for machine to come up
	I0729 14:20:01.224787 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.225211 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Found IP for machine: 192.168.39.73
	I0729 14:20:01.225245 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has current primary IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.225254 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Reserving static IP address...
	I0729 14:20:01.225585 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-050658", mac: "52:54:00:c3:e3:55", ip: "192.168.39.73"} in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.301164 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Getting to WaitForSSH function...
	I0729 14:20:01.301219 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Reserved static IP address: 192.168.39.73
	I0729 14:20:01.301234 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Waiting for SSH to be available...
	I0729 14:20:01.303470 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.303808 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:01.303861 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.303914 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Using SSH client type: external
	I0729 14:20:01.303937 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa (-rw-------)
	I0729 14:20:01.303965 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:20:01.303984 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | About to run SSH command:
	I0729 14:20:01.303998 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | exit 0
	I0729 14:20:01.432566 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | SSH cmd err, output: <nil>: 
	I0729 14:20:01.432877 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) KVM machine creation complete!
	I0729 14:20:01.433163 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetConfigRaw
	I0729 14:20:01.433824 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:20:01.434052 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:20:01.434212 1019233 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 14:20:01.434230 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetState
	I0729 14:20:01.435441 1019233 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 14:20:01.435458 1019233 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 14:20:01.435466 1019233 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 14:20:01.435474 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:01.437761 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.438144 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:01.438178 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.438268 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:01.438468 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.438668 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.438833 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:01.438995 1019233 main.go:141] libmachine: Using SSH client type: native
	I0729 14:20:01.439194 1019233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 14:20:01.439204 1019233 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 14:20:01.547913 1019233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:20:01.547940 1019233 main.go:141] libmachine: Detecting the provisioner...
	I0729 14:20:01.547950 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:01.550970 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.551347 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:01.551384 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.551487 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:01.551685 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.551881 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.552021 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:01.552244 1019233 main.go:141] libmachine: Using SSH client type: native
	I0729 14:20:01.552458 1019233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 14:20:01.552474 1019233 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 14:20:01.665074 1019233 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 14:20:01.665211 1019233 main.go:141] libmachine: found compatible host: buildroot
	I0729 14:20:01.665225 1019233 main.go:141] libmachine: Provisioning with buildroot...
	I0729 14:20:01.665237 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetMachineName
	I0729 14:20:01.665529 1019233 buildroot.go:166] provisioning hostname "kubernetes-upgrade-050658"
	I0729 14:20:01.665556 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetMachineName
	I0729 14:20:01.665743 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:01.668232 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.668574 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:01.668604 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.668709 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:01.668886 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.669034 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.669177 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:01.669371 1019233 main.go:141] libmachine: Using SSH client type: native
	I0729 14:20:01.669629 1019233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 14:20:01.669648 1019233 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-050658 && echo "kubernetes-upgrade-050658" | sudo tee /etc/hostname
	I0729 14:20:01.794152 1019233 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-050658
	
	I0729 14:20:01.794187 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:01.796926 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.797246 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:01.797278 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.797455 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:01.797670 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.797848 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.797968 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:01.798139 1019233 main.go:141] libmachine: Using SSH client type: native
	I0729 14:20:01.798326 1019233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 14:20:01.798349 1019233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-050658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-050658/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-050658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:20:01.921774 1019233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:20:01.921814 1019233 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:20:01.921866 1019233 buildroot.go:174] setting up certificates
	I0729 14:20:01.921877 1019233 provision.go:84] configureAuth start
	I0729 14:20:01.921887 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetMachineName
	I0729 14:20:01.922214 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetIP
	I0729 14:20:01.925100 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.925634 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:01.925666 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.925825 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:01.928182 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.928625 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:01.928650 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.928836 1019233 provision.go:143] copyHostCerts
	I0729 14:20:01.928907 1019233 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:20:01.928921 1019233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:20:01.928987 1019233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:20:01.929121 1019233 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:20:01.929133 1019233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:20:01.929164 1019233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:20:01.929251 1019233 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:20:01.929261 1019233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:20:01.929291 1019233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:20:01.929379 1019233 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-050658 san=[127.0.0.1 192.168.39.73 kubernetes-upgrade-050658 localhost minikube]
	I0729 14:20:01.990998 1019233 provision.go:177] copyRemoteCerts
	I0729 14:20:01.991069 1019233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:20:01.991106 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:01.993707 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.994108 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:01.994134 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:01.994347 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:01.994546 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:01.994721 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:01.994867 1019233 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa Username:docker}
	I0729 14:20:02.082849 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:20:02.106912 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 14:20:02.129395 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:20:02.152173 1019233 provision.go:87] duration metric: took 230.28293ms to configureAuth
	I0729 14:20:02.152205 1019233 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:20:02.152405 1019233 config.go:182] Loaded profile config "kubernetes-upgrade-050658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:20:02.152522 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:02.155441 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.155897 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:02.155931 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.156155 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:02.156379 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:02.156663 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:02.156842 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:02.157002 1019233 main.go:141] libmachine: Using SSH client type: native
	I0729 14:20:02.157182 1019233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 14:20:02.157202 1019233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:20:02.427165 1019233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:20:02.427192 1019233 main.go:141] libmachine: Checking connection to Docker...
	I0729 14:20:02.427201 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetURL
	I0729 14:20:02.428589 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Using libvirt version 6000000
	I0729 14:20:02.430551 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.430937 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:02.430967 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.431144 1019233 main.go:141] libmachine: Docker is up and running!
	I0729 14:20:02.431161 1019233 main.go:141] libmachine: Reticulating splines...
	I0729 14:20:02.431168 1019233 client.go:171] duration metric: took 26.100624726s to LocalClient.Create
	I0729 14:20:02.431193 1019233 start.go:167] duration metric: took 26.100689729s to libmachine.API.Create "kubernetes-upgrade-050658"
	I0729 14:20:02.431202 1019233 start.go:293] postStartSetup for "kubernetes-upgrade-050658" (driver="kvm2")
	I0729 14:20:02.431212 1019233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:20:02.431240 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:20:02.431485 1019233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:20:02.431503 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:02.433685 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.434031 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:02.434065 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.434175 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:02.434352 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:02.434502 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:02.434660 1019233 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa Username:docker}
	I0729 14:20:02.519246 1019233 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:20:02.523789 1019233 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:20:02.523821 1019233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:20:02.523896 1019233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:20:02.523987 1019233 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:20:02.524085 1019233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:20:02.534466 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:20:02.558589 1019233 start.go:296] duration metric: took 127.362361ms for postStartSetup
	I0729 14:20:02.558642 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetConfigRaw
	I0729 14:20:02.559237 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetIP
	I0729 14:20:02.562005 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.562414 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:02.562447 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.562726 1019233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/config.json ...
	I0729 14:20:02.562914 1019233 start.go:128] duration metric: took 26.253488486s to createHost
	I0729 14:20:02.562938 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:02.565319 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.565732 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:02.565771 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.565914 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:02.566126 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:02.566366 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:02.566491 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:02.566618 1019233 main.go:141] libmachine: Using SSH client type: native
	I0729 14:20:02.566813 1019233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0729 14:20:02.566841 1019233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:20:02.677355 1019233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722262802.655194185
	
	I0729 14:20:02.677383 1019233 fix.go:216] guest clock: 1722262802.655194185
	I0729 14:20:02.677390 1019233 fix.go:229] Guest: 2024-07-29 14:20:02.655194185 +0000 UTC Remote: 2024-07-29 14:20:02.562926113 +0000 UTC m=+89.191532487 (delta=92.268072ms)
	I0729 14:20:02.677411 1019233 fix.go:200] guest clock delta is within tolerance: 92.268072ms
	I0729 14:20:02.677416 1019233 start.go:83] releasing machines lock for "kubernetes-upgrade-050658", held for 26.368161983s
	I0729 14:20:02.677451 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:20:02.677735 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetIP
	I0729 14:20:02.680671 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.681129 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:02.681152 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.681271 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:20:02.681840 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:20:02.682065 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:20:02.682179 1019233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:20:02.682237 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:02.682335 1019233 ssh_runner.go:195] Run: cat /version.json
	I0729 14:20:02.682362 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:20:02.684984 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.685338 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.685406 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:02.685431 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.685593 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:02.685779 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:02.685833 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:02.685867 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:02.685943 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:02.686000 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:20:02.686104 1019233 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa Username:docker}
	I0729 14:20:02.686192 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:20:02.686330 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:20:02.686458 1019233 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa Username:docker}
	I0729 14:20:02.777573 1019233 ssh_runner.go:195] Run: systemctl --version
	I0729 14:20:02.798649 1019233 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:20:02.971777 1019233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:20:02.978184 1019233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:20:02.978261 1019233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:20:02.995052 1019233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:20:02.995083 1019233 start.go:495] detecting cgroup driver to use...
	I0729 14:20:02.995174 1019233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:20:03.015954 1019233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:20:03.032537 1019233 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:20:03.032614 1019233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:20:03.048942 1019233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:20:03.064149 1019233 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:20:03.203746 1019233 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:20:03.371351 1019233 docker.go:233] disabling docker service ...
	I0729 14:20:03.371438 1019233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:20:03.388780 1019233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:20:03.404759 1019233 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:20:03.551754 1019233 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:20:03.681764 1019233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:20:03.696679 1019233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:20:03.716300 1019233 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:20:03.716379 1019233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:20:03.728506 1019233 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:20:03.728579 1019233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:20:03.740469 1019233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:20:03.752348 1019233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:20:03.764809 1019233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:20:03.776858 1019233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:20:03.787234 1019233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:20:03.787306 1019233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:20:03.801251 1019233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:20:03.819369 1019233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:20:03.945624 1019233 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:20:04.090980 1019233 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:20:04.091070 1019233 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:20:04.096067 1019233 start.go:563] Will wait 60s for crictl version
	I0729 14:20:04.096125 1019233 ssh_runner.go:195] Run: which crictl
	I0729 14:20:04.100097 1019233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:20:04.142391 1019233 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:20:04.142479 1019233 ssh_runner.go:195] Run: crio --version
	I0729 14:20:04.174337 1019233 ssh_runner.go:195] Run: crio --version
	I0729 14:20:04.207301 1019233 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:20:04.208706 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetIP
	I0729 14:20:04.211821 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:04.212263 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:19:50 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:20:04.212292 1019233 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:20:04.212626 1019233 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:20:04.216859 1019233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:20:04.229410 1019233 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-050658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-050658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:20:04.229513 1019233 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:20:04.229569 1019233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:20:04.263780 1019233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:20:04.263858 1019233 ssh_runner.go:195] Run: which lz4
	I0729 14:20:04.268180 1019233 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:20:04.272535 1019233 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:20:04.272571 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:20:05.962531 1019233 crio.go:462] duration metric: took 1.694375536s to copy over tarball
	I0729 14:20:05.962659 1019233 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:20:08.658593 1019233 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.695894281s)
	I0729 14:20:08.658651 1019233 crio.go:469] duration metric: took 2.696085153s to extract the tarball
	I0729 14:20:08.658675 1019233 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:20:08.704716 1019233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:20:08.754482 1019233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:20:08.754518 1019233 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:20:08.754590 1019233 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:20:08.754663 1019233 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:20:08.754683 1019233 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:20:08.754712 1019233 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:20:08.754601 1019233 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:20:08.754682 1019233 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:20:08.754715 1019233 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:20:08.754691 1019233 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:20:08.756535 1019233 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:20:08.756603 1019233 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:20:08.756617 1019233 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:20:08.756628 1019233 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:20:08.756681 1019233 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:20:08.756687 1019233 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:20:08.756690 1019233 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:20:08.756531 1019233 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:20:08.915141 1019233 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:20:08.917884 1019233 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:20:08.919132 1019233 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:20:08.941964 1019233 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:20:08.946443 1019233 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:20:08.962438 1019233 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:20:08.983032 1019233 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:20:09.005981 1019233 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:20:09.006053 1019233 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:20:09.006113 1019233 ssh_runner.go:195] Run: which crictl
	I0729 14:20:09.018681 1019233 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:20:09.018695 1019233 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:20:09.018730 1019233 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:20:09.018731 1019233 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:20:09.018774 1019233 ssh_runner.go:195] Run: which crictl
	I0729 14:20:09.018774 1019233 ssh_runner.go:195] Run: which crictl
	I0729 14:20:09.047189 1019233 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:20:09.110978 1019233 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:20:09.111033 1019233 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:20:09.111036 1019233 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:20:09.111072 1019233 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:20:09.111112 1019233 ssh_runner.go:195] Run: which crictl
	I0729 14:20:09.111118 1019233 ssh_runner.go:195] Run: which crictl
	I0729 14:20:09.117326 1019233 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:20:09.117397 1019233 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:20:09.117420 1019233 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:20:09.117437 1019233 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:20:09.117426 1019233 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:20:09.117488 1019233 ssh_runner.go:195] Run: which crictl
	I0729 14:20:09.117504 1019233 ssh_runner.go:195] Run: which crictl
	I0729 14:20:09.117373 1019233 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:20:09.117557 1019233 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:20:09.257843 1019233 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:20:09.257896 1019233 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:20:09.257853 1019233 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:20:09.257978 1019233 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:20:09.258009 1019233 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:20:09.258092 1019233 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:20:09.258107 1019233 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:20:09.351478 1019233 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:20:09.351525 1019233 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:20:09.351549 1019233 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:20:09.351595 1019233 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:20:09.351665 1019233 cache_images.go:92] duration metric: took 597.130755ms to LoadCachedImages
	W0729 14:20:09.351771 1019233 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0729 14:20:09.351787 1019233 kubeadm.go:934] updating node { 192.168.39.73 8443 v1.20.0 crio true true} ...
	I0729 14:20:09.351926 1019233 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-050658 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-050658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:20:09.352040 1019233 ssh_runner.go:195] Run: crio config
	I0729 14:20:09.399977 1019233 cni.go:84] Creating CNI manager for ""
	I0729 14:20:09.399999 1019233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:20:09.400009 1019233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:20:09.400028 1019233 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-050658 NodeName:kubernetes-upgrade-050658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:20:09.400159 1019233 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-050658"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:20:09.400221 1019233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:20:09.412350 1019233 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:20:09.412448 1019233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:20:09.424101 1019233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0729 14:20:09.442153 1019233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:20:09.459118 1019233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 14:20:09.476085 1019233 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0729 14:20:09.480100 1019233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:20:09.494563 1019233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:20:09.625276 1019233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:20:09.642775 1019233 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658 for IP: 192.168.39.73
	I0729 14:20:09.642801 1019233 certs.go:194] generating shared ca certs ...
	I0729 14:20:09.642822 1019233 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:20:09.643026 1019233 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:20:09.643078 1019233 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:20:09.643090 1019233 certs.go:256] generating profile certs ...
	I0729 14:20:09.643158 1019233 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/client.key
	I0729 14:20:09.643176 1019233 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/client.crt with IP's: []
	I0729 14:20:09.748146 1019233 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/client.crt ...
	I0729 14:20:09.748192 1019233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/client.crt: {Name:mk2df17d4e9f972a5607b5db5cdc0af7af010466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:20:09.748451 1019233 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/client.key ...
	I0729 14:20:09.748483 1019233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/client.key: {Name:mk747587e1d4b0e55747ab0e140f017b1e9d6fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:20:09.748641 1019233 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.key.0c8019b1
	I0729 14:20:09.748667 1019233 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.crt.0c8019b1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.73]
	I0729 14:20:09.855852 1019233 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.crt.0c8019b1 ...
	I0729 14:20:09.855887 1019233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.crt.0c8019b1: {Name:mke2d832a51fa109a56336510619e67a128949d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:20:09.856079 1019233 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.key.0c8019b1 ...
	I0729 14:20:09.856107 1019233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.key.0c8019b1: {Name:mk0c38b7750b09c8b1d536236b81a37739c87d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:20:09.856228 1019233 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.crt.0c8019b1 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.crt
	I0729 14:20:09.856327 1019233 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.key.0c8019b1 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.key
	I0729 14:20:09.856431 1019233 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/proxy-client.key
	I0729 14:20:09.856455 1019233 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/proxy-client.crt with IP's: []
	I0729 14:20:09.952386 1019233 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/proxy-client.crt ...
	I0729 14:20:09.952434 1019233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/proxy-client.crt: {Name:mkdbaab7e2ce269685ea3fe73422edcb0968e5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:20:09.952622 1019233 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/proxy-client.key ...
	I0729 14:20:09.952642 1019233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/proxy-client.key: {Name:mkefa2218458004e4eba547d9f7718ab489ae3d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:20:09.952875 1019233 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:20:09.952922 1019233 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:20:09.952933 1019233 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:20:09.952952 1019233 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:20:09.952978 1019233 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:20:09.953002 1019233 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:20:09.953041 1019233 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:20:09.953669 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:20:09.982634 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:20:10.010263 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:20:10.037586 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:20:10.069209 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 14:20:10.098258 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:20:10.126756 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:20:10.254295 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:20:10.392326 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:20:10.421601 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:20:10.450673 1019233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:20:10.482720 1019233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:20:10.503465 1019233 ssh_runner.go:195] Run: openssl version
	I0729 14:20:10.511593 1019233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:20:10.536862 1019233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:20:10.543833 1019233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:20:10.543913 1019233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:20:10.552116 1019233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:20:10.570152 1019233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:20:10.586224 1019233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:20:10.590970 1019233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:20:10.591033 1019233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:20:10.597088 1019233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:20:10.608808 1019233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:20:10.622327 1019233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:20:10.627367 1019233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:20:10.627461 1019233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:20:10.633433 1019233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:20:10.645892 1019233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:20:10.650477 1019233 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 14:20:10.650548 1019233 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-050658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-050658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:20:10.650636 1019233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:20:10.650702 1019233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:20:10.703174 1019233 cri.go:89] found id: ""
	I0729 14:20:10.703288 1019233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:20:10.714688 1019233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:20:10.725130 1019233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:20:10.735523 1019233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:20:10.735550 1019233 kubeadm.go:157] found existing configuration files:
	
	I0729 14:20:10.735607 1019233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:20:10.746727 1019233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:20:10.746814 1019233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:20:10.757469 1019233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:20:10.767700 1019233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:20:10.767765 1019233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:20:10.780572 1019233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:20:10.794007 1019233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:20:10.794088 1019233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:20:10.807959 1019233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:20:10.821067 1019233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:20:10.821137 1019233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:20:10.832532 1019233 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:20:11.138279 1019233 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:22:09.349684 1019233 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:22:09.349871 1019233 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:22:09.351460 1019233 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:22:09.351559 1019233 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:22:09.351750 1019233 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:22:09.352226 1019233 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:22:09.352949 1019233 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:22:09.353086 1019233 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:22:09.354897 1019233 out.go:204]   - Generating certificates and keys ...
	I0729 14:22:09.355011 1019233 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:22:09.355096 1019233 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:22:09.355197 1019233 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 14:22:09.355281 1019233 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 14:22:09.355362 1019233 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 14:22:09.355451 1019233 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 14:22:09.355531 1019233 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 14:22:09.355672 1019233 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-050658 localhost] and IPs [192.168.39.73 127.0.0.1 ::1]
	I0729 14:22:09.355754 1019233 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 14:22:09.355887 1019233 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-050658 localhost] and IPs [192.168.39.73 127.0.0.1 ::1]
	I0729 14:22:09.355978 1019233 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 14:22:09.356086 1019233 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 14:22:09.356168 1019233 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 14:22:09.356257 1019233 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:22:09.356324 1019233 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:22:09.356404 1019233 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:22:09.356517 1019233 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:22:09.356573 1019233 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:22:09.356698 1019233 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:22:09.356822 1019233 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:22:09.356888 1019233 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:22:09.356974 1019233 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:22:09.358715 1019233 out.go:204]   - Booting up control plane ...
	I0729 14:22:09.358838 1019233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:22:09.358954 1019233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:22:09.359063 1019233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:22:09.359187 1019233 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:22:09.359425 1019233 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:22:09.359504 1019233 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:22:09.359599 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:22:09.359837 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:22:09.359941 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:22:09.360191 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:22:09.360284 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:22:09.360576 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:22:09.360674 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:22:09.360947 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:22:09.361051 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:22:09.361292 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:22:09.361303 1019233 kubeadm.go:310] 
	I0729 14:22:09.361360 1019233 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:22:09.361415 1019233 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:22:09.361425 1019233 kubeadm.go:310] 
	I0729 14:22:09.361474 1019233 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:22:09.361520 1019233 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:22:09.361667 1019233 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:22:09.361683 1019233 kubeadm.go:310] 
	I0729 14:22:09.361817 1019233 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:22:09.361868 1019233 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:22:09.362021 1019233 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:22:09.362055 1019233 kubeadm.go:310] 
	I0729 14:22:09.362225 1019233 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:22:09.362343 1019233 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:22:09.362354 1019233 kubeadm.go:310] 
	I0729 14:22:09.362518 1019233 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:22:09.362652 1019233 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:22:09.362752 1019233 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:22:09.362860 1019233 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:22:09.362930 1019233 kubeadm.go:310] 
	W0729 14:22:09.363019 1019233 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-050658 localhost] and IPs [192.168.39.73 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-050658 localhost] and IPs [192.168.39.73 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 14:22:09.363079 1019233 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:22:09.894290 1019233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:22:09.911242 1019233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:22:09.924488 1019233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:22:09.924515 1019233 kubeadm.go:157] found existing configuration files:
	
	I0729 14:22:09.924574 1019233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:22:09.934080 1019233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:22:09.934160 1019233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:22:09.944748 1019233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:22:09.954364 1019233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:22:09.954429 1019233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:22:09.964181 1019233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:22:09.973901 1019233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:22:09.973948 1019233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:22:09.983655 1019233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:22:09.994650 1019233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:22:09.994713 1019233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:22:10.004538 1019233 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:22:10.092363 1019233 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:22:10.092527 1019233 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:22:10.281754 1019233 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:22:10.281937 1019233 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:22:10.282090 1019233 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:22:10.499727 1019233 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:22:10.501847 1019233 out.go:204]   - Generating certificates and keys ...
	I0729 14:22:10.501966 1019233 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:22:10.502059 1019233 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:22:10.502153 1019233 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:22:10.502253 1019233 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:22:10.502370 1019233 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:22:10.502446 1019233 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:22:10.503229 1019233 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:22:10.504393 1019233 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:22:10.505599 1019233 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:22:10.506978 1019233 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:22:10.507250 1019233 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:22:10.507335 1019233 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:22:10.771055 1019233 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:22:10.849746 1019233 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:22:11.206952 1019233 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:22:11.357291 1019233 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:22:11.380148 1019233 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:22:11.381760 1019233 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:22:11.381880 1019233 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:22:11.578786 1019233 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:22:11.806771 1019233 out.go:204]   - Booting up control plane ...
	I0729 14:22:11.806952 1019233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:22:11.807072 1019233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:22:11.807203 1019233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:22:11.807339 1019233 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:22:11.807580 1019233 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:22:51.592480 1019233 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:22:51.592616 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:22:51.592903 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:22:56.593450 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:22:56.593736 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:23:06.593982 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:23:06.594297 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:23:26.592928 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:23:26.593240 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:24:06.592463 1019233 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:24:06.592710 1019233 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:24:06.592747 1019233 kubeadm.go:310] 
	I0729 14:24:06.592800 1019233 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:24:06.592854 1019233 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:24:06.592866 1019233 kubeadm.go:310] 
	I0729 14:24:06.592907 1019233 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:24:06.592948 1019233 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:24:06.593074 1019233 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:24:06.593081 1019233 kubeadm.go:310] 
	I0729 14:24:06.593208 1019233 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:24:06.593250 1019233 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:24:06.593286 1019233 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:24:06.593292 1019233 kubeadm.go:310] 
	I0729 14:24:06.593418 1019233 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:24:06.593522 1019233 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:24:06.593549 1019233 kubeadm.go:310] 
	I0729 14:24:06.593642 1019233 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:24:06.593713 1019233 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:24:06.593773 1019233 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:24:06.593852 1019233 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:24:06.593859 1019233 kubeadm.go:310] 
	I0729 14:24:06.595438 1019233 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:24:06.595573 1019233 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:24:06.595691 1019233 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:24:06.595754 1019233 kubeadm.go:394] duration metric: took 3m55.945222684s to StartCluster
	I0729 14:24:06.595806 1019233 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:24:06.595870 1019233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:24:06.654363 1019233 cri.go:89] found id: ""
	I0729 14:24:06.654398 1019233 logs.go:276] 0 containers: []
	W0729 14:24:06.654406 1019233 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:24:06.654412 1019233 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:24:06.654479 1019233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:24:06.698409 1019233 cri.go:89] found id: ""
	I0729 14:24:06.698445 1019233 logs.go:276] 0 containers: []
	W0729 14:24:06.698458 1019233 logs.go:278] No container was found matching "etcd"
	I0729 14:24:06.698465 1019233 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:24:06.698534 1019233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:24:06.734936 1019233 cri.go:89] found id: ""
	I0729 14:24:06.734965 1019233 logs.go:276] 0 containers: []
	W0729 14:24:06.734974 1019233 logs.go:278] No container was found matching "coredns"
	I0729 14:24:06.734980 1019233 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:24:06.735040 1019233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:24:06.786911 1019233 cri.go:89] found id: ""
	I0729 14:24:06.786942 1019233 logs.go:276] 0 containers: []
	W0729 14:24:06.786955 1019233 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:24:06.786965 1019233 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:24:06.787038 1019233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:24:06.828516 1019233 cri.go:89] found id: ""
	I0729 14:24:06.828549 1019233 logs.go:276] 0 containers: []
	W0729 14:24:06.828560 1019233 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:24:06.828568 1019233 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:24:06.828632 1019233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:24:06.881730 1019233 cri.go:89] found id: ""
	I0729 14:24:06.881767 1019233 logs.go:276] 0 containers: []
	W0729 14:24:06.881782 1019233 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:24:06.881790 1019233 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:24:06.881854 1019233 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:24:06.937267 1019233 cri.go:89] found id: ""
	I0729 14:24:06.937291 1019233 logs.go:276] 0 containers: []
	W0729 14:24:06.937299 1019233 logs.go:278] No container was found matching "kindnet"
	I0729 14:24:06.937309 1019233 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:24:06.937331 1019233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:24:07.059934 1019233 logs.go:123] Gathering logs for container status ...
	I0729 14:24:07.059989 1019233 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:24:07.107295 1019233 logs.go:123] Gathering logs for kubelet ...
	I0729 14:24:07.107337 1019233 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:24:07.174065 1019233 logs.go:123] Gathering logs for dmesg ...
	I0729 14:24:07.174104 1019233 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:24:07.192007 1019233 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:24:07.192109 1019233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:24:07.378671 1019233 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 14:24:07.378739 1019233 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:24:07.378801 1019233 out.go:239] * 
	* 
	W0729 14:24:07.378901 1019233 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:24:07.378941 1019233 out.go:239] * 
	* 
	W0729 14:24:07.380198 1019233 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:24:07.384375 1019233 out.go:177] 
	W0729 14:24:07.385946 1019233 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:24:07.386144 1019233 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:24:07.386178 1019233 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 14:24:07.387578 1019233 out.go:177] 

                                                
                                                
** /stderr **
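
(Editor's note, not part of the recorded run: the kubeadm output above names the standard kubelet troubleshooting steps and minikube suggests retrying with an explicit cgroup driver. A minimal, illustrative sketch of those same steps, driven from the host through the test binary and reusing the kubernetes-upgrade-050658 profile and the exact commands quoted in the log, might look like this.)

	# Check the kubelet unit and its recent journal entries on the node
	out/minikube-linux-amd64 -p kubernetes-upgrade-050658 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-050658 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# List control-plane containers known to CRI-O; a failing container's logs can then be fetched with 'crictl logs CONTAINERID'
	out/minikube-linux-amd64 -p kubernetes-upgrade-050658 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# The suggestion printed later in this log is to retry the start with the kubelet cgroup driver set explicitly
	out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
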
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-050658
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-050658: (2.227348941s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-050658 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-050658 status --format={{.Host}}: exit status 7 (87.999622ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.010336068s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-050658 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (100.415463ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-050658] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-050658
	    minikube start -p kubernetes-upgrade-050658 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0506582 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-050658 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-050658 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.200814518s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 14:26:14.144238949 +0000 UTC m=+4460.551962720
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-050658 -n kubernetes-upgrade-050658
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-050658 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-050658 logs -n 25: (1.897799534s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | cat kubelet --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo journalctl                       | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | -xeu kubelet --all --full                            |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo cat                              | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo cat                              | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC |                     |
	|         | status docker --all --full                           |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | cat docker --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo cat                              | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo docker                           | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC |                     |
	|         | status cri-docker --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | cat cri-docker --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo cat                              | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo cat                              | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo                                  | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC |                     |
	|         | status containerd --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | cat containerd --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo cat                              | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo cat                              | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo containerd                       | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | config dump                                          |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | status crio --all --full                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo systemctl                        | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | cat crio --no-pager                                  |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo find                             | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p auto-513289 sudo crio                             | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p auto-513289                                       | auto-513289            | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC | 29 Jul 24 14:25 UTC |
	| start   | -p calico-513289 --memory=3072                       | calico-513289          | jenkins | v1.33.1 | 29 Jul 24 14:25 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                        |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p cert-expiration-869983                            | cert-expiration-869983 | jenkins | v1.33.1 | 29 Jul 24 14:26 UTC |                     |
	|         | --memory=2048                                        |                        |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:26:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:26:10.334274 1026565 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:26:10.334427 1026565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:26:10.334432 1026565 out.go:304] Setting ErrFile to fd 2...
	I0729 14:26:10.334438 1026565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:26:10.334728 1026565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:26:10.335237 1026565 out.go:298] Setting JSON to false
	I0729 14:26:10.336340 1026565 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14922,"bootTime":1722248248,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:26:10.336434 1026565 start.go:139] virtualization: kvm guest
	I0729 14:26:10.338660 1026565 out.go:177] * [cert-expiration-869983] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:26:10.340188 1026565 notify.go:220] Checking for updates...
	I0729 14:26:10.340259 1026565 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:26:10.341772 1026565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:26:10.343144 1026565 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:26:10.344565 1026565 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:26:10.346071 1026565 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:26:10.347495 1026565 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:26:10.349445 1026565 config.go:182] Loaded profile config "cert-expiration-869983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:26:10.350044 1026565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:26:10.350136 1026565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:26:10.369007 1026565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35493
	I0729 14:26:10.369464 1026565 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:26:10.370011 1026565 main.go:141] libmachine: Using API Version  1
	I0729 14:26:10.370027 1026565 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:26:10.370371 1026565 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:26:10.370547 1026565 main.go:141] libmachine: (cert-expiration-869983) Calling .DriverName
	I0729 14:26:10.370793 1026565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:26:10.371082 1026565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:26:10.371105 1026565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:26:10.386945 1026565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44991
	I0729 14:26:10.387569 1026565 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:26:10.388293 1026565 main.go:141] libmachine: Using API Version  1
	I0729 14:26:10.388307 1026565 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:26:10.388712 1026565 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:26:10.388907 1026565 main.go:141] libmachine: (cert-expiration-869983) Calling .DriverName
	I0729 14:26:10.429418 1026565 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:26:10.430693 1026565 start.go:297] selected driver: kvm2
	I0729 14:26:10.430719 1026565 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-869983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:cert-expiration-869983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:26:10.430881 1026565 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:26:10.432122 1026565 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:26:10.432230 1026565 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:26:10.453145 1026565 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:26:10.453596 1026565 cni.go:84] Creating CNI manager for ""
	I0729 14:26:10.453610 1026565 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:26:10.453697 1026565 start.go:340] cluster config:
	{Name:cert-expiration-869983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-869983 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:26:10.453869 1026565 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:26:10.456675 1026565 out.go:177] * Starting "cert-expiration-869983" primary control-plane node in "cert-expiration-869983" cluster
	I0729 14:26:11.151681 1024596 api_server.go:279] https://192.168.39.73:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:26:11.151714 1024596 api_server.go:103] status: https://192.168.39.73:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:26:11.151727 1024596 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I0729 14:26:11.196287 1024596 api_server.go:279] https://192.168.39.73:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:26:11.196323 1024596 api_server.go:103] status: https://192.168.39.73:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:26:11.542777 1024596 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I0729 14:26:11.548762 1024596 api_server.go:279] https://192.168.39.73:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:26:11.548796 1024596 api_server.go:103] status: https://192.168.39.73:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:26:12.043425 1024596 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I0729 14:26:12.051987 1024596 api_server.go:279] https://192.168.39.73:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:26:12.052026 1024596 api_server.go:103] status: https://192.168.39.73:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:26:12.542617 1024596 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I0729 14:26:12.549626 1024596 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
	I0729 14:26:12.557520 1024596 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:26:12.557550 1024596 api_server.go:131] duration metric: took 4.515113922s to wait for apiserver health ...
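[Editor's note] The 403/500/200 sequence above is the normal progression while the apiserver's post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish. A minimal sketch of the kind of polling loop these api_server.go lines reflect is shown below; it is illustrative only, with assumed names and timeouts, not minikube's actual implementation.

    // Sketch only: poll /healthz until it returns 200, tolerating the
    // transient 403 (anonymous forbidden) and 500 (post-start hooks pending)
    // responses seen in the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	// The apiserver serving cert is not trusted by this anonymous probe,
    	// so certificate verification is skipped (assumption for the sketch).
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil // healthz reports "ok"
    			}
    			// 403 and 500 are retried until the deadline.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.73:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }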
	I0729 14:26:12.557561 1024596 cni.go:84] Creating CNI manager for ""
	I0729 14:26:12.557570 1024596 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:26:12.559637 1024596 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:26:12.562739 1024596 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:26:12.575874 1024596 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
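[Editor's note] The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration chosen for the kvm2 + crio combination. The sketch below shows the general shape of such a conflist as a Go string constant; the field values are assumptions for illustration, not the exact contents written in this run.

    // Sketch only: approximate shape of a bridge CNI conflist.
    package main

    import "fmt"

    const bridgeConflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() { fmt.Println(bridgeConflist) }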
	I0729 14:26:12.597891 1024596 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:26:12.607870 1024596 system_pods.go:59] 8 kube-system pods found
	I0729 14:26:12.607903 1024596 system_pods.go:61] "coredns-5cfdc65f69-dz5lj" [07b93682-b9c2-4d8c-a722-4782ac979449] Running
	I0729 14:26:12.607909 1024596 system_pods.go:61] "coredns-5cfdc65f69-kbth5" [762e5cf2-8e5d-4a11-b412-9fdb7912ee51] Running
	I0729 14:26:12.607918 1024596 system_pods.go:61] "etcd-kubernetes-upgrade-050658" [9fc5925b-e932-4693-a1d8-f394e78cf5b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:26:12.607934 1024596 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-050658" [0fbc72d4-98a4-49a6-b947-501df0f71dd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:26:12.607944 1024596 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-050658" [ebdf1d70-6d26-4fe9-b720-a186e8ed6712] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:26:12.607950 1024596 system_pods.go:61] "kube-proxy-mw5bm" [85996dda-66e6-47fa-86ca-2d78b4316af4] Running
	I0729 14:26:12.607957 1024596 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-050658" [b3fae87f-319f-47f2-8d05-94a951619cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:26:12.607969 1024596 system_pods.go:61] "storage-provisioner" [053f9f9e-0322-4e1b-b1d3-560c6baa7479] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:26:12.607979 1024596 system_pods.go:74] duration metric: took 10.058707ms to wait for pod list to return data ...
	I0729 14:26:12.607998 1024596 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:26:12.612224 1024596 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:26:12.612251 1024596 node_conditions.go:123] node cpu capacity is 2
	I0729 14:26:12.612264 1024596 node_conditions.go:105] duration metric: took 4.259044ms to run NodePressure ...
	I0729 14:26:12.612289 1024596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:26:12.927845 1024596 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:26:12.940280 1024596 ops.go:34] apiserver oom_adj: -16
	I0729 14:26:12.940301 1024596 kubeadm.go:597] duration metric: took 27.979569596s to restartPrimaryControlPlane
	I0729 14:26:12.940309 1024596 kubeadm.go:394] duration metric: took 28.12993641s to StartCluster
	I0729 14:26:12.940330 1024596 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:26:12.940442 1024596 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:26:12.941457 1024596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:26:12.941711 1024596 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:26:12.941778 1024596 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:26:12.941866 1024596 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-050658"
	I0729 14:26:12.941887 1024596 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-050658"
	I0729 14:26:12.941923 1024596 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-050658"
	I0729 14:26:12.941970 1024596 config.go:182] Loaded profile config "kubernetes-upgrade-050658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:26:12.941898 1024596 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-050658"
	W0729 14:26:12.941998 1024596 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:26:12.942031 1024596 host.go:66] Checking if "kubernetes-upgrade-050658" exists ...
	I0729 14:26:12.942402 1024596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:26:12.942417 1024596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:26:12.942451 1024596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:26:12.942548 1024596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:26:12.943100 1024596 out.go:177] * Verifying Kubernetes components...
	I0729 14:26:12.944453 1024596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:26:12.957866 1024596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33971
	I0729 14:26:12.958370 1024596 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:26:12.958941 1024596 main.go:141] libmachine: Using API Version  1
	I0729 14:26:12.958968 1024596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:26:12.959030 1024596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0729 14:26:12.959391 1024596 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:26:12.959500 1024596 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:26:12.959863 1024596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:26:12.959909 1024596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:26:12.960328 1024596 main.go:141] libmachine: Using API Version  1
	I0729 14:26:12.960347 1024596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:26:12.960729 1024596 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:26:12.960951 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetState
	I0729 14:26:12.963750 1024596 kapi.go:59] client config for kubernetes-upgrade-050658: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/client.crt", KeyFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kubernetes-upgrade-050658/client.key", CAFile:"/home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 14:26:12.964037 1024596 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-050658"
	W0729 14:26:12.964052 1024596 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:26:12.964076 1024596 host.go:66] Checking if "kubernetes-upgrade-050658" exists ...
	I0729 14:26:12.964311 1024596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:26:12.964346 1024596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:26:12.976031 1024596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0729 14:26:12.976570 1024596 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:26:12.977141 1024596 main.go:141] libmachine: Using API Version  1
	I0729 14:26:12.977163 1024596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:26:12.977517 1024596 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:26:12.977740 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetState
	I0729 14:26:12.979499 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:26:12.980327 1024596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0729 14:26:12.980739 1024596 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:26:12.981267 1024596 main.go:141] libmachine: Using API Version  1
	I0729 14:26:12.981286 1024596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:26:12.981460 1024596 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:26:12.981700 1024596 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:26:12.982439 1024596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:26:12.982481 1024596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:26:12.983088 1024596 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:26:12.983110 1024596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:26:12.983134 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:26:12.986394 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:26:12.986904 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:24:22 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:26:12.986934 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:26:12.987189 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:26:12.987374 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:26:12.987579 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:26:12.987726 1024596 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa Username:docker}
	I0729 14:26:12.999251 1024596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36983
	I0729 14:26:12.999624 1024596 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:26:13.000203 1024596 main.go:141] libmachine: Using API Version  1
	I0729 14:26:13.000231 1024596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:26:13.000728 1024596 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:26:13.001000 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetState
	I0729 14:26:13.002498 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .DriverName
	I0729 14:26:13.002739 1024596 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:26:13.002758 1024596 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:26:13.002783 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHHostname
	I0729 14:26:13.005669 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:26:13.006038 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e3:55", ip: ""} in network mk-kubernetes-upgrade-050658: {Iface:virbr3 ExpiryTime:2024-07-29 15:24:22 +0000 UTC Type:0 Mac:52:54:00:c3:e3:55 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:kubernetes-upgrade-050658 Clientid:01:52:54:00:c3:e3:55}
	I0729 14:26:13.006067 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | domain kubernetes-upgrade-050658 has defined IP address 192.168.39.73 and MAC address 52:54:00:c3:e3:55 in network mk-kubernetes-upgrade-050658
	I0729 14:26:13.006292 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHPort
	I0729 14:26:13.006431 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHKeyPath
	I0729 14:26:13.006573 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .GetSSHUsername
	I0729 14:26:13.006716 1024596 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/kubernetes-upgrade-050658/id_rsa Username:docker}
	I0729 14:26:13.117360 1024596 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:26:13.136255 1024596 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:26:13.136335 1024596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:26:13.149867 1024596 api_server.go:72] duration metric: took 208.116204ms to wait for apiserver process to appear ...
	I0729 14:26:13.149902 1024596 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:26:13.149930 1024596 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I0729 14:26:13.154134 1024596 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
	I0729 14:26:13.155308 1024596 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:26:13.155331 1024596 api_server.go:131] duration metric: took 5.420611ms to wait for apiserver health ...
	I0729 14:26:13.155341 1024596 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:26:13.162041 1024596 system_pods.go:59] 8 kube-system pods found
	I0729 14:26:13.162068 1024596 system_pods.go:61] "coredns-5cfdc65f69-dz5lj" [07b93682-b9c2-4d8c-a722-4782ac979449] Running
	I0729 14:26:13.162075 1024596 system_pods.go:61] "coredns-5cfdc65f69-kbth5" [762e5cf2-8e5d-4a11-b412-9fdb7912ee51] Running
	I0729 14:26:13.162083 1024596 system_pods.go:61] "etcd-kubernetes-upgrade-050658" [9fc5925b-e932-4693-a1d8-f394e78cf5b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:26:13.162092 1024596 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-050658" [0fbc72d4-98a4-49a6-b947-501df0f71dd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:26:13.162102 1024596 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-050658" [ebdf1d70-6d26-4fe9-b720-a186e8ed6712] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:26:13.162108 1024596 system_pods.go:61] "kube-proxy-mw5bm" [85996dda-66e6-47fa-86ca-2d78b4316af4] Running
	I0729 14:26:13.162117 1024596 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-050658" [b3fae87f-319f-47f2-8d05-94a951619cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:26:13.162127 1024596 system_pods.go:61] "storage-provisioner" [053f9f9e-0322-4e1b-b1d3-560c6baa7479] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:26:13.162138 1024596 system_pods.go:74] duration metric: took 6.790519ms to wait for pod list to return data ...
	I0729 14:26:13.162152 1024596 kubeadm.go:582] duration metric: took 220.412546ms to wait for: map[apiserver:true system_pods:true]
	I0729 14:26:13.162172 1024596 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:26:13.165617 1024596 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:26:13.165641 1024596 node_conditions.go:123] node cpu capacity is 2
	I0729 14:26:13.165655 1024596 node_conditions.go:105] duration metric: took 3.477879ms to run NodePressure ...
	I0729 14:26:13.165670 1024596 start.go:241] waiting for startup goroutines ...
	I0729 14:26:13.267976 1024596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:26:13.296898 1024596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:26:14.060068 1024596 main.go:141] libmachine: Making call to close driver server
	I0729 14:26:14.060099 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .Close
	I0729 14:26:14.060125 1024596 main.go:141] libmachine: Making call to close driver server
	I0729 14:26:14.060145 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .Close
	I0729 14:26:14.060427 1024596 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:26:14.060444 1024596 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:26:14.060454 1024596 main.go:141] libmachine: Making call to close driver server
	I0729 14:26:14.060463 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .Close
	I0729 14:26:14.060606 1024596 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:26:14.060628 1024596 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:26:14.060638 1024596 main.go:141] libmachine: Making call to close driver server
	I0729 14:26:14.060655 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .Close
	I0729 14:26:14.060871 1024596 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:26:14.060890 1024596 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:26:14.060936 1024596 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:26:14.060952 1024596 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:26:14.070918 1024596 main.go:141] libmachine: Making call to close driver server
	I0729 14:26:14.070938 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) Calling .Close
	I0729 14:26:14.071170 1024596 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:26:14.071188 1024596 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:26:14.071262 1024596 main.go:141] libmachine: (kubernetes-upgrade-050658) DBG | Closing plugin on server side
	I0729 14:26:14.073436 1024596 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 14:26:14.074885 1024596 addons.go:510] duration metric: took 1.133118093s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 14:26:14.074917 1024596 start.go:246] waiting for cluster config update ...
	I0729 14:26:14.074931 1024596 start.go:255] writing updated cluster config ...
	I0729 14:26:14.075147 1024596 ssh_runner.go:195] Run: rm -f paused
	I0729 14:26:14.126360 1024596 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 14:26:14.128241 1024596 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-050658" cluster and "default" namespace by default
	I0729 14:26:11.186294 1026424 main.go:141] libmachine: (calico-513289) DBG | domain calico-513289 has defined MAC address 52:54:00:e1:44:9a in network mk-calico-513289
	I0729 14:26:11.186875 1026424 main.go:141] libmachine: (calico-513289) DBG | unable to find current IP address of domain calico-513289 in network mk-calico-513289
	I0729 14:26:11.186899 1026424 main.go:141] libmachine: (calico-513289) DBG | I0729 14:26:11.186834 1026447 retry.go:31] will retry after 3.393412276s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.905602000Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e872cf2f-1dfd-4185-afd2-0d6c414c2b03 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.907839275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d69d04f-b811-4a79-9116-216b0cf258d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.908232470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263174908208897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d69d04f-b811-4a79-9116-216b0cf258d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.908922202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00f54c70-8bbf-49f4-8409-7b78c5fe8943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.908976830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00f54c70-8bbf-49f4-8409-7b78c5fe8943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.909327965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a45f80fdd7b936cac6b6b8ddb87b0543e78d0d9bff63438abbfb5306f96c4c9,PodSandboxId:571b75b15f30a08ed9956b67b9f9ee49825cf59bcb2bcd67dbc8715ba6834162,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263172245016606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053f9f9e-0322-4e1b-b1d3-560c6baa7479,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1b8eb8154efa1a7e5287425ee3f23a2571875777377063619298b44f7f0c50,PodSandboxId:8517d69150feb8e782a73691b48b0c41d8c0619719e36c9e17796c912b0f727d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722263167508431291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5c97892e0b0d7a1087c436a61bdbd3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba2b3c55dc38e729b8d7a35e2ae2f3d4968490fb338bf858876d15a1d37d59e,PodSandboxId:9e2cf22f60e89da071c9ec99f94b40dcccfbb7898c7c30e8a811ba90f03d26ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722263167498863194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93513898be23a065464b7bfb5903f99,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f05f15fbf3868951cedfcef501ff68efde2a560836b8ebb18f48a8fb7211331,PodSandboxId:7941f9659243fc85562d1341899035de80ef2f6c9c22fc3cc1336ff72765fc6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722263167510623758,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970ffa0c455ee6d1ef5a18b59f0fc61e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836c1ee6d54a7520e640ce078d61cf908cb731bd4f3493cc2c4659c6ff3643d1,PodSandboxId:d41d29b0bdefc03a8c368817f6bc176a3b60c14f24114bb566485ff068f50720,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722263167488814669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bcb1ab133bedfb70d6711c351c509e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d555c8bf833f9558c64578ae52ce0320aff366fb7e8ef354b242ac7e79e2645,PodSandboxId:9f7c728258859335f7600bef04de3325a2fd6fc852a6db0b19a0b1ac07ba894b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263159442493661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dz5lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b93682-b9c2-4d8c-a722-4782ac979449,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a26f38c3391a73ee0e34d3e4f0538d6035ba8e0577be04295cf681676c11dd,PodSandboxId:cbcf456c14429063b49af6db2a391356b7bd74f32f1878edcad23df4d9dcee5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722263143939260404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw5bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85996dda-66e6-47fa-
86ca-2d78b4316af4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6341a1162e640a7d9fed5628047a4ef0d10d1330dff3e6a60f1d6ca949d2330,PodSandboxId:b772359368f88aee8fb42fa26fc5d1cdc001cf2c1eb7674167e3dccb3091baa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263145095709682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kbth5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762e5cf2-8e5d-4a11-b412-9fdb7912ee51,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a21dc2c9549e3c72cca3dfef962a5758c0b26521b03182125d46169ad815f3,PodSandboxId:7941f9659243fc85562d1341899035de80ef2f6c9c22fc3cc1336ff72765fc6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722263143885123222,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970ffa0c455ee6d1ef5a18b59f0fc61e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddac43d407a206083c3e393a14a719c6e769d9fd07d771f0f66494019bfb475,PodSandboxId:9e2cf22f60e89da071c9ec99f94b40dcccfbb7898c7c30e8a811ba90f03d26ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722263143904439493,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93513898be23a065464b7bfb5903f99,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b901340feaeb2a94129590cb225a146d43cad0c4f0b1c5fdab264d63abc9700d,PodSandboxId:8517d69150feb8e782a73691b48b0c41d8c0619719e36c9e17796c912b0f727d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722263143943367767,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5c97892e0b0d7a1087c436a61bdbd3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3a6264fd3d1f8e3a862ea9575a84f5cfe475df34d47d83883d4b9861f5f93f,PodSandboxId:d41d29b0bdefc03a8c368817f6bc176a3b60c14f24114bb566485ff068f50720,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263143669609875,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bcb1ab133bedfb70d6711c351c509e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cd0fe0ac15ddbac71aa78a2c733c91fe63a317c22e6ccfebbd94041328b519a,PodSandboxId:571b75b15f30a08ed9956b67b9f9ee49825cf59bcb2bcd67dbc8715ba6834162,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263143506657137,Labels:map[string]string{io.kubernetes.container
.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053f9f9e-0322-4e1b-b1d3-560c6baa7479,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3360eb56e85327d1c42afddce5e319061dfb553f387e79cbac4fb8a5833f2273,PodSandboxId:8ddd7368cdaf786f2278ecb4fbab0867b2710ec343a774edf52ae3811a427ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263131575941009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuberne
tes.pod.name: coredns-5cfdc65f69-kbth5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762e5cf2-8e5d-4a11-b412-9fdb7912ee51,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7751923b3672228b20fe087a7c4af67cb235b08c990075e685c5cdeede70361b,PodSandboxId:fdc444c43f686e48a5e150444e5430ec08610e1d97156e239ec6f1cd2eb1379a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722263130847521697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw5bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85996dda-66e6-47fa-86ca-2d78b4316af4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec293b341a8e7f87466661434529bf20173a03b975d66cb2fa4cdf27540529d,PodSandboxId:d72c4a6f7af8cc636c6e9c78b654fbb1be8921d98fe8f3ffe8bb71e0f674f299,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263131240202154,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dz5lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b93682-b9c2-4d8c-a722-4782ac979449,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00f54c70-8bbf-49f4-8409-7b78c5fe8943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.962906964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d97125d-675a-4dae-86cb-f0a69cc482db name=/runtime.v1.RuntimeService/Version
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.963015358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d97125d-675a-4dae-86cb-f0a69cc482db name=/runtime.v1.RuntimeService/Version
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.972700087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f25f9647-db8e-4c3f-b9db-f367d9d57316 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.973177020Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263174973152196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f25f9647-db8e-4c3f-b9db-f367d9d57316 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.973796045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95d2eeca-8d84-4aed-8885-797561cba41b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.973868483Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95d2eeca-8d84-4aed-8885-797561cba41b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:14 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:14.974242509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a45f80fdd7b936cac6b6b8ddb87b0543e78d0d9bff63438abbfb5306f96c4c9,PodSandboxId:571b75b15f30a08ed9956b67b9f9ee49825cf59bcb2bcd67dbc8715ba6834162,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263172245016606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053f9f9e-0322-4e1b-b1d3-560c6baa7479,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1b8eb8154efa1a7e5287425ee3f23a2571875777377063619298b44f7f0c50,PodSandboxId:8517d69150feb8e782a73691b48b0c41d8c0619719e36c9e17796c912b0f727d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722263167508431291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5c97892e0b0d7a1087c436a61bdbd3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba2b3c55dc38e729b8d7a35e2ae2f3d4968490fb338bf858876d15a1d37d59e,PodSandboxId:9e2cf22f60e89da071c9ec99f94b40dcccfbb7898c7c30e8a811ba90f03d26ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722263167498863194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93513898be23a065464b7bfb5903f99,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f05f15fbf3868951cedfcef501ff68efde2a560836b8ebb18f48a8fb7211331,PodSandboxId:7941f9659243fc85562d1341899035de80ef2f6c9c22fc3cc1336ff72765fc6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722263167510623758,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970ffa0c455ee6d1ef5a18b59f0fc61e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836c1ee6d54a7520e640ce078d61cf908cb731bd4f3493cc2c4659c6ff3643d1,PodSandboxId:d41d29b0bdefc03a8c368817f6bc176a3b60c14f24114bb566485ff068f50720,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722263167488814669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bcb1ab133bedfb70d6711c351c509e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d555c8bf833f9558c64578ae52ce0320aff366fb7e8ef354b242ac7e79e2645,PodSandboxId:9f7c728258859335f7600bef04de3325a2fd6fc852a6db0b19a0b1ac07ba894b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263159442493661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dz5lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b93682-b9c2-4d8c-a722-4782ac979449,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a26f38c3391a73ee0e34d3e4f0538d6035ba8e0577be04295cf681676c11dd,PodSandboxId:cbcf456c14429063b49af6db2a391356b7bd74f32f1878edcad23df4d9dcee5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722263143939260404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw5bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85996dda-66e6-47fa-
86ca-2d78b4316af4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6341a1162e640a7d9fed5628047a4ef0d10d1330dff3e6a60f1d6ca949d2330,PodSandboxId:b772359368f88aee8fb42fa26fc5d1cdc001cf2c1eb7674167e3dccb3091baa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263145095709682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kbth5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762e5cf2-8e5d-4a11-b412-9fdb7912ee51,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a21dc2c9549e3c72cca3dfef962a5758c0b26521b03182125d46169ad815f3,PodSandboxId:7941f9659243fc85562d1341899035de80ef2f6c9c22fc3cc1336ff72765fc6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722263143885123222,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970ffa0c455ee6d1ef5a18b59f0fc61e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddac43d407a206083c3e393a14a719c6e769d9fd07d771f0f66494019bfb475,PodSandboxId:9e2cf22f60e89da071c9ec99f94b40dcccfbb7898c7c30e8a811ba90f03d26ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722263143904439493,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93513898be23a065464b7bfb5903f99,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b901340feaeb2a94129590cb225a146d43cad0c4f0b1c5fdab264d63abc9700d,PodSandboxId:8517d69150feb8e782a73691b48b0c41d8c0619719e36c9e17796c912b0f727d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722263143943367767,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5c97892e0b0d7a1087c436a61bdbd3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3a6264fd3d1f8e3a862ea9575a84f5cfe475df34d47d83883d4b9861f5f93f,PodSandboxId:d41d29b0bdefc03a8c368817f6bc176a3b60c14f24114bb566485ff068f50720,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263143669609875,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bcb1ab133bedfb70d6711c351c509e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cd0fe0ac15ddbac71aa78a2c733c91fe63a317c22e6ccfebbd94041328b519a,PodSandboxId:571b75b15f30a08ed9956b67b9f9ee49825cf59bcb2bcd67dbc8715ba6834162,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263143506657137,Labels:map[string]string{io.kubernetes.container
.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053f9f9e-0322-4e1b-b1d3-560c6baa7479,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3360eb56e85327d1c42afddce5e319061dfb553f387e79cbac4fb8a5833f2273,PodSandboxId:8ddd7368cdaf786f2278ecb4fbab0867b2710ec343a774edf52ae3811a427ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263131575941009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuberne
tes.pod.name: coredns-5cfdc65f69-kbth5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762e5cf2-8e5d-4a11-b412-9fdb7912ee51,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7751923b3672228b20fe087a7c4af67cb235b08c990075e685c5cdeede70361b,PodSandboxId:fdc444c43f686e48a5e150444e5430ec08610e1d97156e239ec6f1cd2eb1379a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722263130847521697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw5bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85996dda-66e6-47fa-86ca-2d78b4316af4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec293b341a8e7f87466661434529bf20173a03b975d66cb2fa4cdf27540529d,PodSandboxId:d72c4a6f7af8cc636c6e9c78b654fbb1be8921d98fe8f3ffe8bb71e0f674f299,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263131240202154,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dz5lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b93682-b9c2-4d8c-a722-4782ac979449,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95d2eeca-8d84-4aed-8885-797561cba41b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.028004085Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e99d96f7-34ca-4c4d-a7e9-0c5ac6975b26 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.028090670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e99d96f7-34ca-4c4d-a7e9-0c5ac6975b26 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.029889524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d9814de-c9c9-4f7c-9528-bc87f971ff46 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.030260492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263175030238559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d9814de-c9c9-4f7c-9528-bc87f971ff46 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.030906207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6da7fe6-6456-4a8b-8174-011b20472498 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.030976856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6da7fe6-6456-4a8b-8174-011b20472498 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.031282385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a45f80fdd7b936cac6b6b8ddb87b0543e78d0d9bff63438abbfb5306f96c4c9,PodSandboxId:571b75b15f30a08ed9956b67b9f9ee49825cf59bcb2bcd67dbc8715ba6834162,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263172245016606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053f9f9e-0322-4e1b-b1d3-560c6baa7479,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1b8eb8154efa1a7e5287425ee3f23a2571875777377063619298b44f7f0c50,PodSandboxId:8517d69150feb8e782a73691b48b0c41d8c0619719e36c9e17796c912b0f727d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722263167508431291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5c97892e0b0d7a1087c436a61bdbd3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba2b3c55dc38e729b8d7a35e2ae2f3d4968490fb338bf858876d15a1d37d59e,PodSandboxId:9e2cf22f60e89da071c9ec99f94b40dcccfbb7898c7c30e8a811ba90f03d26ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722263167498863194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93513898be23a065464b7bfb5903f99,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f05f15fbf3868951cedfcef501ff68efde2a560836b8ebb18f48a8fb7211331,PodSandboxId:7941f9659243fc85562d1341899035de80ef2f6c9c22fc3cc1336ff72765fc6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722263167510623758,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970ffa0c455ee6d1ef5a18b59f0fc61e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836c1ee6d54a7520e640ce078d61cf908cb731bd4f3493cc2c4659c6ff3643d1,PodSandboxId:d41d29b0bdefc03a8c368817f6bc176a3b60c14f24114bb566485ff068f50720,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722263167488814669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bcb1ab133bedfb70d6711c351c509e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d555c8bf833f9558c64578ae52ce0320aff366fb7e8ef354b242ac7e79e2645,PodSandboxId:9f7c728258859335f7600bef04de3325a2fd6fc852a6db0b19a0b1ac07ba894b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263159442493661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dz5lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b93682-b9c2-4d8c-a722-4782ac979449,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a26f38c3391a73ee0e34d3e4f0538d6035ba8e0577be04295cf681676c11dd,PodSandboxId:cbcf456c14429063b49af6db2a391356b7bd74f32f1878edcad23df4d9dcee5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722263143939260404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw5bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85996dda-66e6-47fa-
86ca-2d78b4316af4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6341a1162e640a7d9fed5628047a4ef0d10d1330dff3e6a60f1d6ca949d2330,PodSandboxId:b772359368f88aee8fb42fa26fc5d1cdc001cf2c1eb7674167e3dccb3091baa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263145095709682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kbth5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762e5cf2-8e5d-4a11-b412-9fdb7912ee51,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a21dc2c9549e3c72cca3dfef962a5758c0b26521b03182125d46169ad815f3,PodSandboxId:7941f9659243fc85562d1341899035de80ef2f6c9c22fc3cc1336ff72765fc6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722263143885123222,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970ffa0c455ee6d1ef5a18b59f0fc61e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddac43d407a206083c3e393a14a719c6e769d9fd07d771f0f66494019bfb475,PodSandboxId:9e2cf22f60e89da071c9ec99f94b40dcccfbb7898c7c30e8a811ba90f03d26ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722263143904439493,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93513898be23a065464b7bfb5903f99,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b901340feaeb2a94129590cb225a146d43cad0c4f0b1c5fdab264d63abc9700d,PodSandboxId:8517d69150feb8e782a73691b48b0c41d8c0619719e36c9e17796c912b0f727d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722263143943367767,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5c97892e0b0d7a1087c436a61bdbd3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3a6264fd3d1f8e3a862ea9575a84f5cfe475df34d47d83883d4b9861f5f93f,PodSandboxId:d41d29b0bdefc03a8c368817f6bc176a3b60c14f24114bb566485ff068f50720,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263143669609875,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bcb1ab133bedfb70d6711c351c509e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cd0fe0ac15ddbac71aa78a2c733c91fe63a317c22e6ccfebbd94041328b519a,PodSandboxId:571b75b15f30a08ed9956b67b9f9ee49825cf59bcb2bcd67dbc8715ba6834162,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263143506657137,Labels:map[string]string{io.kubernetes.container
.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053f9f9e-0322-4e1b-b1d3-560c6baa7479,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3360eb56e85327d1c42afddce5e319061dfb553f387e79cbac4fb8a5833f2273,PodSandboxId:8ddd7368cdaf786f2278ecb4fbab0867b2710ec343a774edf52ae3811a427ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263131575941009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuberne
tes.pod.name: coredns-5cfdc65f69-kbth5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762e5cf2-8e5d-4a11-b412-9fdb7912ee51,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7751923b3672228b20fe087a7c4af67cb235b08c990075e685c5cdeede70361b,PodSandboxId:fdc444c43f686e48a5e150444e5430ec08610e1d97156e239ec6f1cd2eb1379a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722263130847521697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw5bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85996dda-66e6-47fa-86ca-2d78b4316af4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec293b341a8e7f87466661434529bf20173a03b975d66cb2fa4cdf27540529d,PodSandboxId:d72c4a6f7af8cc636c6e9c78b654fbb1be8921d98fe8f3ffe8bb71e0f674f299,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263131240202154,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dz5lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b93682-b9c2-4d8c-a722-4782ac979449,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6da7fe6-6456-4a8b-8174-011b20472498 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.043833598Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce58e85c-cc8b-4a47-934d-2ca2bd9adf57 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.044039379Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9f7c728258859335f7600bef04de3325a2fd6fc852a6db0b19a0b1ac07ba894b,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-dz5lj,Uid:07b93682-b9c2-4d8c-a722-4782ac979449,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263143744266286,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-dz5lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b93682-b9c2-4d8c-a722-4782ac979449,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:24:53.164034614Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b772359368f88aee8fb42fa26fc5d1cdc001cf2c1eb7674167e3dccb3091baa9,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-kbth5,Uid:762e5cf2-8e5d-4a11-b412-9fdb7912ee51,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263143639532570,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-kbth5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762e5cf2-8e5d-4a11-b412-9fdb7912ee51,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:24:53.208310749Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7941f9659243fc85562d1341899035de80ef2f6c9c22fc3cc1336ff72765fc6d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-050658,Uid:970ffa0c455ee6d1ef5a18b59f0fc61e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263143385573465,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970ffa0c455ee6d1ef5a18b59f0fc61e,tier: control-plane,},Ann
otations:map[string]string{kubernetes.io/config.hash: 970ffa0c455ee6d1ef5a18b59f0fc61e,kubernetes.io/config.seen: 2024-07-29T14:24:38.377841228Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e2cf22f60e89da071c9ec99f94b40dcccfbb7898c7c30e8a811ba90f03d26ae,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-050658,Uid:e93513898be23a065464b7bfb5903f99,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263143375927828,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93513898be23a065464b7bfb5903f99,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e93513898be23a065464b7bfb5903f99,kubernetes.io/config.seen: 2024-07-29T14:24:38.377846266Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8517d69150feb8e782a73691b48b0c41d8c0
619719e36c9e17796c912b0f727d,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-050658,Uid:8d5c97892e0b0d7a1087c436a61bdbd3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263143362498219,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5c97892e0b0d7a1087c436a61bdbd3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.73:2379,kubernetes.io/config.hash: 8d5c97892e0b0d7a1087c436a61bdbd3,kubernetes.io/config.seen: 2024-07-29T14:24:38.406626310Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cbcf456c14429063b49af6db2a391356b7bd74f32f1878edcad23df4d9dcee5d,Metadata:&PodSandboxMetadata{Name:kube-proxy-mw5bm,Uid:85996dda-66e6-47fa-86ca-2d78b4316af4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263143296589772,Labels:map[string]string{
controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mw5bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85996dda-66e6-47fa-86ca-2d78b4316af4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:24:53.205387700Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d41d29b0bdefc03a8c368817f6bc176a3b60c14f24114bb566485ff068f50720,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-050658,Uid:50bcb1ab133bedfb70d6711c351c509e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263143258249544,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bcb1ab133bedfb70d6711c351c509e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise
-address.endpoint: 192.168.39.73:8443,kubernetes.io/config.hash: 50bcb1ab133bedfb70d6711c351c509e,kubernetes.io/config.seen: 2024-07-29T14:24:38.377844948Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:571b75b15f30a08ed9956b67b9f9ee49825cf59bcb2bcd67dbc8715ba6834162,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:053f9f9e-0322-4e1b-b1d3-560c6baa7479,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263143142223480,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053f9f9e-0322-4e1b-b1d3-560c6baa7479,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\
":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T14:24:54.719246997Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ce58e85c-cc8b-4a47-934d-2ca2bd9adf57 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.044773851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a328906-dd91-4044-a448-f6834cfe28b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.044848141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a328906-dd91-4044-a448-f6834cfe28b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:26:15 kubernetes-upgrade-050658 crio[3054]: time="2024-07-29 14:26:15.045038423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a45f80fdd7b936cac6b6b8ddb87b0543e78d0d9bff63438abbfb5306f96c4c9,PodSandboxId:571b75b15f30a08ed9956b67b9f9ee49825cf59bcb2bcd67dbc8715ba6834162,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263172245016606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053f9f9e-0322-4e1b-b1d3-560c6baa7479,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1b8eb8154efa1a7e5287425ee3f23a2571875777377063619298b44f7f0c50,PodSandboxId:8517d69150feb8e782a73691b48b0c41d8c0619719e36c9e17796c912b0f727d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722263167508431291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5c97892e0b0d7a1087c436a61bdbd3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba2b3c55dc38e729b8d7a35e2ae2f3d4968490fb338bf858876d15a1d37d59e,PodSandboxId:9e2cf22f60e89da071c9ec99f94b40dcccfbb7898c7c30e8a811ba90f03d26ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722263167498863194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93513898be23a065464b7bfb5903f99,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f05f15fbf3868951cedfcef501ff68efde2a560836b8ebb18f48a8fb7211331,PodSandboxId:7941f9659243fc85562d1341899035de80ef2f6c9c22fc3cc1336ff72765fc6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722263167510623758,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970ffa0c455ee6d1ef5a18b59f0fc61e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836c1ee6d54a7520e640ce078d61cf908cb731bd4f3493cc2c4659c6ff3643d1,PodSandboxId:d41d29b0bdefc03a8c368817f6bc176a3b60c14f24114bb566485ff068f50720,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722263167488814669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-050658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bcb1ab133bedfb70d6711c351c509e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d555c8bf833f9558c64578ae52ce0320aff366fb7e8ef354b242ac7e79e2645,PodSandboxId:9f7c728258859335f7600bef04de3325a2fd6fc852a6db0b19a0b1ac07ba894b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263159442493661,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-dz5lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b93682-b9c2-4d8c-a722-4782ac979449,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\
"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a26f38c3391a73ee0e34d3e4f0538d6035ba8e0577be04295cf681676c11dd,PodSandboxId:cbcf456c14429063b49af6db2a391356b7bd74f32f1878edcad23df4d9dcee5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722263143939260404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw5bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85996dda-66e6-47fa-
86ca-2d78b4316af4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6341a1162e640a7d9fed5628047a4ef0d10d1330dff3e6a60f1d6ca949d2330,PodSandboxId:b772359368f88aee8fb42fa26fc5d1cdc001cf2c1eb7674167e3dccb3091baa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263145095709682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-kbth5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762e5cf2-8e5d-4a11-b412-9fdb7912ee51,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a328906-dd91-4044-a448-f6834cfe28b8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a45f80fdd7b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       3                   571b75b15f30a       storage-provisioner
	9f05f15fbf386       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            3                   7941f9659243f       kube-scheduler-kubernetes-upgrade-050658
	bb1b8eb8154ef       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      3                   8517d69150feb       etcd-kubernetes-upgrade-050658
	cba2b3c55dc38       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   3                   9e2cf22f60e89       kube-controller-manager-kubernetes-upgrade-050658
	836c1ee6d54a7       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            3                   d41d29b0bdefc       kube-apiserver-kubernetes-upgrade-050658
	8d555c8bf833f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 seconds ago      Running             coredns                   2                   9f7c728258859       coredns-5cfdc65f69-dz5lj
	b6341a1162e64       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   30 seconds ago      Running             coredns                   2                   b772359368f88       coredns-5cfdc65f69-kbth5
	b901340feaeb2       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   31 seconds ago      Exited              etcd                      2                   8517d69150feb       etcd-kubernetes-upgrade-050658
	33a26f38c3391       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   31 seconds ago      Running             kube-proxy                2                   cbcf456c14429       kube-proxy-mw5bm
	8ddac43d407a2       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   31 seconds ago      Exited              kube-controller-manager   2                   9e2cf22f60e89       kube-controller-manager-kubernetes-upgrade-050658
	a3a21dc2c9549       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   31 seconds ago      Exited              kube-scheduler            2                   7941f9659243f       kube-scheduler-kubernetes-upgrade-050658
	9c3a6264fd3d1       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   31 seconds ago      Exited              kube-apiserver            2                   d41d29b0bdefc       kube-apiserver-kubernetes-upgrade-050658
	1cd0fe0ac15dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   31 seconds ago      Exited              storage-provisioner       2                   571b75b15f30a       storage-provisioner
	3360eb56e8532       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   8ddd7368cdaf7       coredns-5cfdc65f69-kbth5
	fec293b341a8e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   d72c4a6f7af8c       coredns-5cfdc65f69-dz5lj
	7751923b36722       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   44 seconds ago      Exited              kube-proxy                1                   fdc444c43f686       kube-proxy-mw5bm
	
	
	==> coredns [3360eb56e85327d1c42afddce5e319061dfb553f387e79cbac4fb8a5833f2273] <==
	
	
	==> coredns [8d555c8bf833f9558c64578ae52ce0320aff366fb7e8ef354b242ac7e79e2645] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:58234->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:58234->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:58246->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:58220->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:58220->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:58246->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [b6341a1162e640a7d9fed5628047a4ef0d10d1330dff3e6a60f1d6ca949d2330] <==
	Trace[171352351]: [10.001262463s] [10.001262463s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1005436819]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 14:25:45.403) (total time: 10001ms):
	Trace[1005436819]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:25:55.404)
	Trace[1005436819]: [10.001847193s] [10.001847193s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[157336376]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 14:25:45.400) (total time: 10004ms):
	Trace[157336376]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (14:25:55.405)
	Trace[157336376]: [10.004631495s] [10.004631495s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44998->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44998->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44988->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44988->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44980->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44980->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	
	
	==> coredns [fec293b341a8e7f87466661434529bf20173a03b975d66cb2fa4cdf27540529d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-050658
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-050658
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:24:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-050658
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:26:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:26:11 +0000   Mon, 29 Jul 2024 14:24:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:26:11 +0000   Mon, 29 Jul 2024 14:24:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:26:11 +0000   Mon, 29 Jul 2024 14:24:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:26:11 +0000   Mon, 29 Jul 2024 14:24:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    kubernetes-upgrade-050658
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 922f66ff6cfa43a1af6268cf24b8027e
	  System UUID:                922f66ff-6cfa-43a1-af62-68cf24b8027e
	  Boot ID:                    16614f5a-5200-40d1-8de7-e489699b6644
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-dz5lj                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     82s
	  kube-system                 coredns-5cfdc65f69-kbth5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     82s
	  kube-system                 etcd-kubernetes-upgrade-050658                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         80s
	  kube-system                 kube-apiserver-kubernetes-upgrade-050658             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-050658    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-mw5bm                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-kubernetes-upgrade-050658             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 97s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s (x8 over 97s)  kubelet          Node kubernetes-upgrade-050658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x8 over 97s)  kubelet          Node kubernetes-upgrade-050658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x7 over 97s)  kubelet          Node kubernetes-upgrade-050658 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           83s                node-controller  Node kubernetes-upgrade-050658 event: Registered Node kubernetes-upgrade-050658 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-050658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-050658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-050658 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.080852] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.061432] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061069] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.203112] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.153215] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.268169] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +4.115229] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +1.877578] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.077979] kauditd_printk_skb: 158 callbacks suppressed
	[ +14.459543] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.611473] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[Jul29 14:25] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.528778] systemd-fstab-generator[2467]: Ignoring "noauto" option for root device
	[  +0.281256] systemd-fstab-generator[2593]: Ignoring "noauto" option for root device
	[  +0.339803] systemd-fstab-generator[2717]: Ignoring "noauto" option for root device
	[  +0.327847] systemd-fstab-generator[2846]: Ignoring "noauto" option for root device
	[  +0.696700] systemd-fstab-generator[2991]: Ignoring "noauto" option for root device
	[ +10.998270] systemd-fstab-generator[3371]: Ignoring "noauto" option for root device
	[  +0.094389] kauditd_printk_skb: 206 callbacks suppressed
	[ +12.643524] kauditd_printk_skb: 123 callbacks suppressed
	[Jul29 14:26] systemd-fstab-generator[4407]: Ignoring "noauto" option for root device
	[  +5.609476] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.699952] systemd-fstab-generator[4786]: Ignoring "noauto" option for root device
	
	
	==> etcd [b901340feaeb2a94129590cb225a146d43cad0c4f0b1c5fdab264d63abc9700d] <==
	{"level":"info","ts":"2024-07-29T14:25:44.758389Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T14:25:44.791622Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","commit-index":416}
	{"level":"info","ts":"2024-07-29T14:25:44.793119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T14:25:44.79763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T14:25:44.797843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 217be714ae9a82b8 [peers: [], term: 2, commit: 416, applied: 0, lastindex: 416, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T14:25:44.803923Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T14:25:44.829599Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":403}
	{"level":"info","ts":"2024-07-29T14:25:44.853054Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T14:25:44.867621Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"217be714ae9a82b8","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:25:44.876143Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"217be714ae9a82b8"}
	{"level":"info","ts":"2024-07-29T14:25:44.876242Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"217be714ae9a82b8","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T14:25:44.876364Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:25:44.876512Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:25:44.87652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:25:44.876598Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T14:25:44.878997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 switched to configuration voters=(2412776101401756344)"}
	{"level":"info","ts":"2024-07-29T14:25:44.879063Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","added-peer-id":"217be714ae9a82b8","added-peer-peer-urls":["https://192.168.39.73:2380"]}
	{"level":"info","ts":"2024-07-29T14:25:44.879151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:25:44.879173Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:25:44.897804Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T14:25:44.915448Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-07-29T14:25:44.915468Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-07-29T14:25:44.915393Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T14:25:44.931349Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"217be714ae9a82b8","initial-advertise-peer-urls":["https://192.168.39.73:2380"],"listen-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T14:25:44.931413Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [bb1b8eb8154efa1a7e5287425ee3f23a2571875777377063619298b44f7f0c50] <==
	{"level":"info","ts":"2024-07-29T14:26:08.3518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 switched to configuration voters=(2412776101401756344)"}
	{"level":"info","ts":"2024-07-29T14:26:08.355031Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","added-peer-id":"217be714ae9a82b8","added-peer-peer-urls":["https://192.168.39.73:2380"]}
	{"level":"info","ts":"2024-07-29T14:26:08.355433Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:26:08.357832Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:26:08.362816Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T14:26:08.36307Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"217be714ae9a82b8","initial-advertise-peer-urls":["https://192.168.39.73:2380"],"listen-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T14:26:08.363129Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T14:26:08.363215Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-07-29T14:26:08.363251Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-07-29T14:26:09.774454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T14:26:09.77455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:26:09.774581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgPreVoteResp from 217be714ae9a82b8 at term 2"}
	{"level":"info","ts":"2024-07-29T14:26:09.774599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T14:26:09.774604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgVoteResp from 217be714ae9a82b8 at term 3"}
	{"level":"info","ts":"2024-07-29T14:26:09.774624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T14:26:09.774635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 217be714ae9a82b8 elected leader 217be714ae9a82b8 at term 3"}
	{"level":"info","ts":"2024-07-29T14:26:09.781202Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:26:09.781145Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"217be714ae9a82b8","local-member-attributes":"{Name:kubernetes-upgrade-050658 ClientURLs:[https://192.168.39.73:2379]}","request-path":"/0/members/217be714ae9a82b8/attributes","cluster-id":"97141299b087eff6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:26:09.782195Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:26:09.78257Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T14:26:09.782709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:26:09.78283Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:26:09.783599Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T14:26:09.783944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T14:26:09.784915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.73:2379"}
	
	
	==> kernel <==
	 14:26:15 up 2 min,  0 users,  load average: 0.95, 0.41, 0.15
	Linux kubernetes-upgrade-050658 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [836c1ee6d54a7520e640ce078d61cf908cb731bd4f3493cc2c4659c6ff3643d1] <==
	I0729 14:26:11.180697       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 14:26:11.180707       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0729 14:26:11.217686       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 14:26:11.217941       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 14:26:11.219223       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 14:26:11.220331       1 aggregator.go:171] initial CRD sync complete...
	I0729 14:26:11.220419       1 autoregister_controller.go:144] Starting autoregister controller
	I0729 14:26:11.220488       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 14:26:11.279393       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 14:26:11.291526       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 14:26:11.291565       1 policy_source.go:224] refreshing policies
	I0729 14:26:11.301066       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 14:26:11.318512       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 14:26:11.318641       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0729 14:26:11.318674       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0729 14:26:11.321108       1 cache.go:39] Caches are synced for autoregister controller
	I0729 14:26:11.321804       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0729 14:26:11.328158       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 14:26:11.329355       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0729 14:26:12.127798       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 14:26:12.721935       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 14:26:12.735855       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 14:26:12.780574       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 14:26:12.898473       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 14:26:12.907460       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [9c3a6264fd3d1f8e3a862ea9575a84f5cfe475df34d47d83883d4b9861f5f93f] <==
	I0729 14:25:44.487365       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:25:45.070101       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 14:25:45.098560       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 14:25:45.127272       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 14:25:45.127931       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 14:25:45.128540       1 instance.go:231] Using reconciler: lease
	W0729 14:25:45.228076       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:42722->127.0.0.1:2379: read: connection reset by peer"
	W0729 14:25:45.228176       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:42702->127.0.0.1:2379: read: connection reset by peer"
	W0729 14:25:45.228242       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:42714->127.0.0.1:2379: read: connection reset by peer"
	W0729 14:25:46.228590       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:46.228891       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:46.228927       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:47.571787       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:47.650328       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:47.923851       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:49.704439       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:50.606634       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:50.967123       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:53.220373       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:54.340257       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:25:55.380575       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:26:00.643263       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:26:01.901287       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:26:02.336597       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0729 14:26:05.131404       1 instance.go:224] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [8ddac43d407a206083c3e393a14a719c6e769d9fd07d771f0f66494019bfb475] <==
	
	
	==> kube-controller-manager [cba2b3c55dc38e729b8d7a35e2ae2f3d4968490fb338bf858876d15a1d37d59e] <==
	I0729 14:26:15.615595       1 shared_informer.go:320] Caches are synced for node
	I0729 14:26:15.615642       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0729 14:26:15.615975       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0729 14:26:15.615984       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0729 14:26:15.615990       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0729 14:26:15.616153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-050658"
	I0729 14:26:15.648017       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 14:26:15.648321       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 14:26:15.648419       1 shared_informer.go:320] Caches are synced for expand
	I0729 14:26:15.648558       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 14:26:15.664999       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 14:26:15.665063       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 14:26:15.687109       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 14:26:15.687796       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 14:26:15.687844       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-050658"
	I0729 14:26:15.694402       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 14:26:15.694533       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 14:26:15.695052       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 14:26:15.718989       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 14:26:15.721248       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 14:26:15.749346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="62.046138ms"
	I0729 14:26:15.749602       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="113.856µs"
	I0729 14:26:15.751219       1 shared_informer.go:320] Caches are synced for disruption
	I0729 14:26:15.822690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="54.526846ms"
	I0729 14:26:15.822819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="48.06µs"
	
	
	==> kube-proxy [33a26f38c3391a73ee0e34d3e4f0538d6035ba8e0577be04295cf681676c11dd] <==
	E0729 14:25:45.594873       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 14:25:55.597701       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-050658\": net/http: TLS handshake timeout"
	E0729 14:26:06.138187       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-050658\": dial tcp 192.168.39.73:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.73:33762->192.168.39.73:8443: read: connection reset by peer"
	E0729 14:26:11.209191       1 server.go:671] "Failed to retrieve node info" err="nodes \"kubernetes-upgrade-050658\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"
	I0729 14:26:15.632319       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.73"]
	E0729 14:26:15.632532       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 14:26:15.754273       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 14:26:15.754383       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:26:15.754415       1 server_linux.go:170] "Using iptables Proxier"
	I0729 14:26:15.762863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 14:26:15.766462       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 14:26:15.766567       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:26:15.791181       1 config.go:197] "Starting service config controller"
	I0729 14:26:15.791201       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:26:15.791223       1 config.go:104] "Starting endpoint slice config controller"
	I0729 14:26:15.791227       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:26:15.824140       1 config.go:326] "Starting node config controller"
	I0729 14:26:15.824157       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:26:15.892290       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:26:15.892211       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 14:26:15.924516       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7751923b3672228b20fe087a7c4af67cb235b08c990075e685c5cdeede70361b] <==
	
	
	==> kube-scheduler [9f05f15fbf3868951cedfcef501ff68efde2a560836b8ebb18f48a8fb7211331] <==
	I0729 14:26:08.667607       1 serving.go:386] Generated self-signed cert in-memory
	W0729 14:26:11.187246       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 14:26:11.187295       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:26:11.187305       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 14:26:11.187315       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 14:26:11.216153       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 14:26:11.216198       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:26:11.233058       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 14:26:11.233252       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 14:26:11.233287       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:26:11.233310       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 14:26:11.333402       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a3a21dc2c9549e3c72cca3dfef962a5758c0b26521b03182125d46169ad815f3] <==
	I0729 14:25:45.595375       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.255788    4414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e93513898be23a065464b7bfb5903f99-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-050658\" (UID: \"e93513898be23a065464b7bfb5903f99\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-050658"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.255805    4414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e93513898be23a065464b7bfb5903f99-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-050658\" (UID: \"e93513898be23a065464b7bfb5903f99\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-050658"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.255820    4414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/970ffa0c455ee6d1ef5a18b59f0fc61e-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-050658\" (UID: \"970ffa0c455ee6d1ef5a18b59f0fc61e\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-050658"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.255836    4414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50bcb1ab133bedfb70d6711c351c509e-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-050658\" (UID: \"50bcb1ab133bedfb70d6711c351c509e\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-050658"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.255855    4414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50bcb1ab133bedfb70d6711c351c509e-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-050658\" (UID: \"50bcb1ab133bedfb70d6711c351c509e\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-050658"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.448638    4414 scope.go:117] "RemoveContainer" containerID="b901340feaeb2a94129590cb225a146d43cad0c4f0b1c5fdab264d63abc9700d"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.451203    4414 scope.go:117] "RemoveContainer" containerID="8ddac43d407a206083c3e393a14a719c6e769d9fd07d771f0f66494019bfb475"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.453890    4414 scope.go:117] "RemoveContainer" containerID="9c3a6264fd3d1f8e3a862ea9575a84f5cfe475df34d47d83883d4b9861f5f93f"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.459342    4414 scope.go:117] "RemoveContainer" containerID="a3a21dc2c9549e3c72cca3dfef962a5758c0b26521b03182125d46169ad815f3"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: E0729 14:26:07.546993    4414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-050658?timeout=10s\": dial tcp 192.168.39.73:8443: connect: connection refused" interval="800ms"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:07.655290    4414 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-050658"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: E0729 14:26:07.656400    4414 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.73:8443: connect: connection refused" node="kubernetes-upgrade-050658"
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: W0729 14:26:07.971646    4414 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-050658&limit=500&resourceVersion=0": dial tcp 192.168.39.73:8443: connect: connection refused
	Jul 29 14:26:07 kubernetes-upgrade-050658 kubelet[4414]: E0729 14:26:07.971818    4414 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-050658&limit=500&resourceVersion=0\": dial tcp 192.168.39.73:8443: connect: connection refused" logger="UnhandledError"
	Jul 29 14:26:08 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:08.457835    4414 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-050658"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.369582    4414 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-050658"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.369679    4414 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-050658"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.369704    4414 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.370667    4414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.910211    4414 apiserver.go:52] "Watching apiserver"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.939170    4414 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.988997    4414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85996dda-66e6-47fa-86ca-2d78b4316af4-lib-modules\") pod \"kube-proxy-mw5bm\" (UID: \"85996dda-66e6-47fa-86ca-2d78b4316af4\") " pod="kube-system/kube-proxy-mw5bm"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.989134    4414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/053f9f9e-0322-4e1b-b1d3-560c6baa7479-tmp\") pod \"storage-provisioner\" (UID: \"053f9f9e-0322-4e1b-b1d3-560c6baa7479\") " pod="kube-system/storage-provisioner"
	Jul 29 14:26:11 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:11.989329    4414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85996dda-66e6-47fa-86ca-2d78b4316af4-xtables-lock\") pod \"kube-proxy-mw5bm\" (UID: \"85996dda-66e6-47fa-86ca-2d78b4316af4\") " pod="kube-system/kube-proxy-mw5bm"
	Jul 29 14:26:12 kubernetes-upgrade-050658 kubelet[4414]: I0729 14:26:12.222162    4414 scope.go:117] "RemoveContainer" containerID="1cd0fe0ac15ddbac71aa78a2c733c91fe63a317c22e6ccfebbd94041328b519a"
	
	
	==> storage-provisioner [1cd0fe0ac15ddbac71aa78a2c733c91fe63a317c22e6ccfebbd94041328b519a] <==
	I0729 14:25:44.685841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 14:25:55.032649       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: TLS handshake timeout
	
	
	==> storage-provisioner [9a45f80fdd7b936cac6b6b8ddb87b0543e78d0d9bff63438abbfb5306f96c4c9] <==
	I0729 14:26:12.362428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 14:26:12.372447       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 14:26:12.372556       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-050658 -n kubernetes-upgrade-050658
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-050658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-050658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-050658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-050658: (1.014066312s)
--- FAIL: TestKubernetesUpgrade (464.32s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-414966 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-414966 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.142505472s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-414966] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-414966" primary control-plane node in "pause-414966" cluster
	* Updating the running kvm2 "pause-414966" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-414966" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 14:23:16.462190 1023077 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:23:16.462341 1023077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:23:16.462354 1023077 out.go:304] Setting ErrFile to fd 2...
	I0729 14:23:16.462361 1023077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:23:16.462571 1023077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:23:16.463103 1023077 out.go:298] Setting JSON to false
	I0729 14:23:16.464089 1023077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14748,"bootTime":1722248248,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:23:16.464149 1023077 start.go:139] virtualization: kvm guest
	I0729 14:23:16.466240 1023077 out.go:177] * [pause-414966] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:23:16.467633 1023077 notify.go:220] Checking for updates...
	I0729 14:23:16.467639 1023077 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:23:16.469298 1023077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:23:16.470674 1023077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:23:16.472267 1023077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:23:16.473864 1023077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:23:16.475364 1023077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:23:16.477037 1023077 config.go:182] Loaded profile config "pause-414966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:23:16.477482 1023077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:23:16.477538 1023077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:23:16.494217 1023077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I0729 14:23:16.494761 1023077 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:23:16.495322 1023077 main.go:141] libmachine: Using API Version  1
	I0729 14:23:16.495345 1023077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:23:16.495777 1023077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:23:16.495971 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:16.496249 1023077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:23:16.496580 1023077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:23:16.496634 1023077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:23:16.512525 1023077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0729 14:23:16.512985 1023077 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:23:16.513549 1023077 main.go:141] libmachine: Using API Version  1
	I0729 14:23:16.513569 1023077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:23:16.513947 1023077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:23:16.514146 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:16.557496 1023077 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:23:16.559107 1023077 start.go:297] selected driver: kvm2
	I0729 14:23:16.559128 1023077 start.go:901] validating driver "kvm2" against &{Name:pause-414966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-414966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:23:16.559308 1023077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:23:16.559674 1023077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:23:16.559756 1023077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:23:16.575936 1023077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:23:16.576683 1023077 cni.go:84] Creating CNI manager for ""
	I0729 14:23:16.576701 1023077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:23:16.576794 1023077 start.go:340] cluster config:
	{Name:pause-414966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-414966 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:23:16.576955 1023077 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:23:16.578716 1023077 out.go:177] * Starting "pause-414966" primary control-plane node in "pause-414966" cluster
	I0729 14:23:16.579936 1023077 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:23:16.579988 1023077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 14:23:16.580001 1023077 cache.go:56] Caching tarball of preloaded images
	I0729 14:23:16.580140 1023077 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:23:16.580156 1023077 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 14:23:16.580334 1023077 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966/config.json ...
	I0729 14:23:16.580627 1023077 start.go:360] acquireMachinesLock for pause-414966: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:23:16.580711 1023077 start.go:364] duration metric: took 53.601µs to acquireMachinesLock for "pause-414966"
	I0729 14:23:16.580735 1023077 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:23:16.580742 1023077 fix.go:54] fixHost starting: 
	I0729 14:23:16.581108 1023077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:23:16.581159 1023077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:23:16.597244 1023077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0729 14:23:16.597770 1023077 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:23:16.598286 1023077 main.go:141] libmachine: Using API Version  1
	I0729 14:23:16.598309 1023077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:23:16.598646 1023077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:23:16.598889 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:16.599020 1023077 main.go:141] libmachine: (pause-414966) Calling .GetState
	I0729 14:23:16.600870 1023077 fix.go:112] recreateIfNeeded on pause-414966: state=Running err=<nil>
	W0729 14:23:16.600895 1023077 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:23:16.602647 1023077 out.go:177] * Updating the running kvm2 "pause-414966" VM ...
	I0729 14:23:16.603867 1023077 machine.go:94] provisionDockerMachine start ...
	I0729 14:23:16.603894 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:16.604112 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:16.607218 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.607688 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:16.607716 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.607904 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:16.608086 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:16.608224 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:16.608370 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:16.608562 1023077 main.go:141] libmachine: Using SSH client type: native
	I0729 14:23:16.608755 1023077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.133 22 <nil> <nil>}
	I0729 14:23:16.608768 1023077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:23:16.721978 1023077 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-414966
	
	I0729 14:23:16.722009 1023077 main.go:141] libmachine: (pause-414966) Calling .GetMachineName
	I0729 14:23:16.722276 1023077 buildroot.go:166] provisioning hostname "pause-414966"
	I0729 14:23:16.722310 1023077 main.go:141] libmachine: (pause-414966) Calling .GetMachineName
	I0729 14:23:16.722517 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:16.725613 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.726083 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:16.726113 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.726357 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:16.726577 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:16.726770 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:16.726916 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:16.727095 1023077 main.go:141] libmachine: Using SSH client type: native
	I0729 14:23:16.727326 1023077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.133 22 <nil> <nil>}
	I0729 14:23:16.727348 1023077 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-414966 && echo "pause-414966" | sudo tee /etc/hostname
	I0729 14:23:16.852083 1023077 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-414966
	
	I0729 14:23:16.852109 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:16.855194 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.855576 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:16.855625 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.855809 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:16.855974 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:16.856159 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:16.856290 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:16.856458 1023077 main.go:141] libmachine: Using SSH client type: native
	I0729 14:23:16.856676 1023077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.133 22 <nil> <nil>}
	I0729 14:23:16.856698 1023077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-414966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-414966/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-414966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:23:16.973644 1023077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:23:16.973679 1023077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:23:16.973732 1023077 buildroot.go:174] setting up certificates
	I0729 14:23:16.973741 1023077 provision.go:84] configureAuth start
	I0729 14:23:16.973754 1023077 main.go:141] libmachine: (pause-414966) Calling .GetMachineName
	I0729 14:23:16.974076 1023077 main.go:141] libmachine: (pause-414966) Calling .GetIP
	I0729 14:23:16.977208 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.977635 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:16.977673 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.977748 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:16.980193 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.980571 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:16.980617 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:16.980773 1023077 provision.go:143] copyHostCerts
	I0729 14:23:16.980846 1023077 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:23:16.980864 1023077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:23:16.980930 1023077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:23:16.981063 1023077 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:23:16.981074 1023077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:23:16.981106 1023077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:23:16.981200 1023077 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:23:16.981209 1023077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:23:16.981236 1023077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:23:16.981318 1023077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.pause-414966 san=[127.0.0.1 192.168.50.133 localhost minikube pause-414966]
	I0729 14:23:17.111602 1023077 provision.go:177] copyRemoteCerts
	I0729 14:23:17.111697 1023077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:23:17.111728 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:17.114746 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:17.115108 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:17.115136 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:17.115368 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:17.115574 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:17.115743 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:17.115890 1023077 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/pause-414966/id_rsa Username:docker}
	I0729 14:23:17.204748 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:23:17.237839 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 14:23:17.265538 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:23:17.301184 1023077 provision.go:87] duration metric: took 327.419758ms to configureAuth
	I0729 14:23:17.301214 1023077 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:23:17.301428 1023077 config.go:182] Loaded profile config "pause-414966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:23:17.301506 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:17.304222 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:17.304568 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:17.304612 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:17.304793 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:17.304982 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:17.305126 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:17.305285 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:17.305496 1023077 main.go:141] libmachine: Using SSH client type: native
	I0729 14:23:17.305708 1023077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.133 22 <nil> <nil>}
	I0729 14:23:17.305725 1023077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:23:24.810985 1023077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:23:24.811015 1023077 machine.go:97] duration metric: took 8.207129069s to provisionDockerMachine
	I0729 14:23:24.811030 1023077 start.go:293] postStartSetup for "pause-414966" (driver="kvm2")
	I0729 14:23:24.811043 1023077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:23:24.811067 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:24.811507 1023077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:23:24.811536 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:24.814463 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:24.814909 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:24.814941 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:24.815147 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:24.815366 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:24.815548 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:24.815704 1023077 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/pause-414966/id_rsa Username:docker}
	I0729 14:23:24.902852 1023077 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:23:24.907265 1023077 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:23:24.907291 1023077 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:23:24.907349 1023077 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:23:24.907434 1023077 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:23:24.907526 1023077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:23:24.917854 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:23:24.943606 1023077 start.go:296] duration metric: took 132.559979ms for postStartSetup
	I0729 14:23:24.943658 1023077 fix.go:56] duration metric: took 8.362915496s for fixHost
	I0729 14:23:24.943686 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:24.946415 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:24.946813 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:24.946863 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:24.946997 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:24.947198 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:24.947357 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:24.947548 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:24.947770 1023077 main.go:141] libmachine: Using SSH client type: native
	I0729 14:23:24.947999 1023077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.133 22 <nil> <nil>}
	I0729 14:23:24.948013 1023077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:23:25.061798 1023077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263005.051369738
	
	I0729 14:23:25.061841 1023077 fix.go:216] guest clock: 1722263005.051369738
	I0729 14:23:25.061872 1023077 fix.go:229] Guest: 2024-07-29 14:23:25.051369738 +0000 UTC Remote: 2024-07-29 14:23:24.943664422 +0000 UTC m=+8.517615380 (delta=107.705316ms)
	I0729 14:23:25.061906 1023077 fix.go:200] guest clock delta is within tolerance: 107.705316ms
	I0729 14:23:25.061914 1023077 start.go:83] releasing machines lock for "pause-414966", held for 8.481187053s
	I0729 14:23:25.061944 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:25.062232 1023077 main.go:141] libmachine: (pause-414966) Calling .GetIP
	I0729 14:23:25.065600 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:25.066024 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:25.066054 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:25.066287 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:25.066884 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:25.067081 1023077 main.go:141] libmachine: (pause-414966) Calling .DriverName
	I0729 14:23:25.067176 1023077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:23:25.067229 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:25.067445 1023077 ssh_runner.go:195] Run: cat /version.json
	I0729 14:23:25.067472 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHHostname
	I0729 14:23:25.070406 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:25.070735 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:25.070753 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:25.070898 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:25.071033 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:25.071235 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:25.071414 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:25.071572 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:25.071596 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:25.071563 1023077 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/pause-414966/id_rsa Username:docker}
	I0729 14:23:25.071804 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHPort
	I0729 14:23:25.071977 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHKeyPath
	I0729 14:23:25.072128 1023077 main.go:141] libmachine: (pause-414966) Calling .GetSSHUsername
	I0729 14:23:25.072255 1023077 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/pause-414966/id_rsa Username:docker}
	I0729 14:23:25.183963 1023077 ssh_runner.go:195] Run: systemctl --version
	I0729 14:23:25.190017 1023077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:23:25.358336 1023077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:23:25.367485 1023077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:23:25.367591 1023077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:23:25.380670 1023077 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 14:23:25.380699 1023077 start.go:495] detecting cgroup driver to use...
	I0729 14:23:25.380771 1023077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:23:25.404301 1023077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:23:25.423002 1023077 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:23:25.423061 1023077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:23:25.442622 1023077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:23:25.461301 1023077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:23:25.604681 1023077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:23:25.740120 1023077 docker.go:233] disabling docker service ...
	I0729 14:23:25.740204 1023077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:23:25.758229 1023077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:23:25.773026 1023077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:23:25.965182 1023077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:23:26.305673 1023077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:23:26.330637 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:23:26.388661 1023077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:23:26.388767 1023077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:23:26.525266 1023077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:23:26.525345 1023077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:23:26.587563 1023077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:23:26.670085 1023077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:23:26.723587 1023077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:23:26.762570 1023077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:23:26.810501 1023077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:23:26.827193 1023077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:23:26.849491 1023077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:23:26.866827 1023077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:23:26.882589 1023077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:23:27.157262 1023077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:23:27.677535 1023077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:23:27.677630 1023077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:23:27.683021 1023077 start.go:563] Will wait 60s for crictl version
	I0729 14:23:27.683090 1023077 ssh_runner.go:195] Run: which crictl
	I0729 14:23:27.687148 1023077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:23:27.726954 1023077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:23:27.727047 1023077 ssh_runner.go:195] Run: crio --version
	I0729 14:23:27.755596 1023077 ssh_runner.go:195] Run: crio --version
	I0729 14:23:27.787977 1023077 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:23:27.789349 1023077 main.go:141] libmachine: (pause-414966) Calling .GetIP
	I0729 14:23:27.791886 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:27.792161 1023077 main.go:141] libmachine: (pause-414966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:17:c4", ip: ""} in network mk-pause-414966: {Iface:virbr2 ExpiryTime:2024-07-29 15:21:53 +0000 UTC Type:0 Mac:52:54:00:a8:17:c4 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:pause-414966 Clientid:01:52:54:00:a8:17:c4}
	I0729 14:23:27.792191 1023077 main.go:141] libmachine: (pause-414966) DBG | domain pause-414966 has defined IP address 192.168.50.133 and MAC address 52:54:00:a8:17:c4 in network mk-pause-414966
	I0729 14:23:27.792428 1023077 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 14:23:27.797279 1023077 kubeadm.go:883] updating cluster {Name:pause-414966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-414966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:23:27.797456 1023077 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:23:27.797517 1023077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:23:27.843001 1023077 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:23:27.843024 1023077 crio.go:433] Images already preloaded, skipping extraction
	I0729 14:23:27.843075 1023077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:23:27.878893 1023077 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:23:27.878919 1023077 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:23:27.878927 1023077 kubeadm.go:934] updating node { 192.168.50.133 8443 v1.30.3 crio true true} ...
	I0729 14:23:27.879041 1023077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-414966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-414966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:23:27.879116 1023077 ssh_runner.go:195] Run: crio config
	I0729 14:23:27.931336 1023077 cni.go:84] Creating CNI manager for ""
	I0729 14:23:27.931359 1023077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:23:27.931368 1023077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:23:27.931389 1023077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.133 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-414966 NodeName:pause-414966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:23:27.931532 1023077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-414966"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.133"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:23:27.931599 1023077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:23:27.942713 1023077 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:23:27.942797 1023077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:23:27.952938 1023077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 14:23:27.972328 1023077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:23:28.001567 1023077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0729 14:23:28.078775 1023077 ssh_runner.go:195] Run: grep 192.168.50.133	control-plane.minikube.internal$ /etc/hosts
	I0729 14:23:28.089759 1023077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:23:28.326122 1023077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:23:28.399465 1023077 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966 for IP: 192.168.50.133
	I0729 14:23:28.399494 1023077 certs.go:194] generating shared ca certs ...
	I0729 14:23:28.399515 1023077 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:23:28.399702 1023077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:23:28.399744 1023077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:23:28.399754 1023077 certs.go:256] generating profile certs ...
	I0729 14:23:28.399870 1023077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966/client.key
	I0729 14:23:28.399944 1023077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966/apiserver.key.e3b859b5
	I0729 14:23:28.399996 1023077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966/proxy-client.key
	I0729 14:23:28.400134 1023077 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:23:28.400172 1023077 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:23:28.400183 1023077 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:23:28.400217 1023077 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:23:28.400248 1023077 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:23:28.400280 1023077 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:23:28.400332 1023077 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:23:28.400981 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:23:28.528586 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:23:28.617869 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:23:28.672959 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:23:28.716685 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 14:23:28.743717 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:23:28.783183 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:23:28.816583 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/pause-414966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:23:28.851088 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:23:28.881392 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:23:28.921357 1023077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:23:28.958211 1023077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:23:28.998607 1023077 ssh_runner.go:195] Run: openssl version
	I0729 14:23:29.017867 1023077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:23:29.031766 1023077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:23:29.039707 1023077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:23:29.039794 1023077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:23:29.048156 1023077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:23:29.064612 1023077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:23:29.081416 1023077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:23:29.087802 1023077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:23:29.087878 1023077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:23:29.096065 1023077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:23:29.107346 1023077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:23:29.121762 1023077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:23:29.126956 1023077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:23:29.127031 1023077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:23:29.133505 1023077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
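
The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes of the corresponding CA certificates. A minimal Go sketch of that hash-and-link step, assuming the certificate path from the log (the helper name `linkBySubjectHash` is hypothetical, and the real flow runs these commands over SSH with sudo):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash asks openssl for the subject-name hash of a CA certificate,
// then exposes it in /etc/ssl/certs under "<hash>.0" so OpenSSL-based clients
// can discover it - the same pattern the ln -fs commands above implement.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Mirror the -f semantics of ln: replace any stale link using this hash.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```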
	I0729 14:23:29.144032 1023077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:23:29.149330 1023077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:23:29.156197 1023077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:23:29.162596 1023077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:23:29.168982 1023077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:23:29.175206 1023077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:23:29.183034 1023077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
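
Each `openssl x509 -checkend 86400` call above succeeds only if the certificate remains valid for at least another 24 hours. An equivalent check written directly in Go (a sketch; `expiresWithin` is a hypothetical helper, and the path is taken from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires in less
// than d - the condition `openssl x509 -checkend 86400` tests with d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```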
	I0729 14:23:29.189584 1023077 kubeadm.go:392] StartCluster: {Name:pause-414966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-414966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:23:29.189747 1023077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:23:29.189802 1023077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:23:29.279018 1023077 cri.go:89] found id: "035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16"
	I0729 14:23:29.279049 1023077 cri.go:89] found id: "8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c"
	I0729 14:23:29.279055 1023077 cri.go:89] found id: "8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee"
	I0729 14:23:29.279059 1023077 cri.go:89] found id: "692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8"
	I0729 14:23:29.279064 1023077 cri.go:89] found id: "f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd"
	I0729 14:23:29.279068 1023077 cri.go:89] found id: "1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466"
	I0729 14:23:29.279073 1023077 cri.go:89] found id: "594d17490a04d5ffe49bbf6c7987154d651e6110bbeea4dacc652c85edf3d360"
	I0729 14:23:29.279077 1023077 cri.go:89] found id: ""
	I0729 14:23:29.279133 1023077 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-414966 -n pause-414966
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-414966 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-414966 logs -n 25: (1.546049143s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-050658       | kubernetes-upgrade-050658 | jenkins | v1.33.1 | 29 Jul 24 14:18 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-764732        | force-systemd-env-764732  | jenkins | v1.33.1 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:19 UTC |
	| start   | -p stopped-upgrade-626874          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:20 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p offline-crio-715623             | offline-crio-715623       | jenkins | v1.33.1 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:19 UTC |
	| start   | -p running-upgrade-932740          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:20 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-626874 stop        | minikube                  | jenkins | v1.26.0 | 29 Jul 24 14:20 UTC | 29 Jul 24 14:20 UTC |
	| start   | -p stopped-upgrade-626874          | stopped-upgrade-626874    | jenkins | v1.33.1 | 29 Jul 24 14:20 UTC | 29 Jul 24 14:21 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:20 UTC | 29 Jul 24 14:20 UTC |
	| start   | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:20 UTC | 29 Jul 24 14:21 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-932740          | running-upgrade-932740    | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:22 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721916 sudo        | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-626874          | stopped-upgrade-626874    | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:21 UTC |
	| start   | -p pause-414966 --memory=2048      | pause-414966              | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:23 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:21 UTC |
	| start   | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:22 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721916 sudo        | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC | 29 Jul 24 14:22 UTC |
	| start   | -p cert-expiration-869983          | cert-expiration-869983    | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC | 29 Jul 24 14:23 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-932740          | running-upgrade-932740    | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC | 29 Jul 24 14:22 UTC |
	| start   | -p force-systemd-flag-956245       | force-systemd-flag-956245 | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC | 29 Jul 24 14:23 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-414966                    | pause-414966              | jenkins | v1.33.1 | 29 Jul 24 14:23 UTC | 29 Jul 24 14:24 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-956245 ssh cat  | force-systemd-flag-956245 | jenkins | v1.33.1 | 29 Jul 24 14:23 UTC | 29 Jul 24 14:23 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-956245       | force-systemd-flag-956245 | jenkins | v1.33.1 | 29 Jul 24 14:23 UTC | 29 Jul 24 14:23 UTC |
	| start   | -p cert-options-442776             | cert-options-442776       | jenkins | v1.33.1 | 29 Jul 24 14:23 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:23:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:23:37.822288 1023348 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:23:37.822403 1023348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:23:37.822408 1023348 out.go:304] Setting ErrFile to fd 2...
	I0729 14:23:37.822411 1023348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:23:37.822587 1023348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:23:37.823175 1023348 out.go:298] Setting JSON to false
	I0729 14:23:37.824269 1023348 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14770,"bootTime":1722248248,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:23:37.824319 1023348 start.go:139] virtualization: kvm guest
	I0729 14:23:37.827206 1023348 out.go:177] * [cert-options-442776] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:23:37.828790 1023348 notify.go:220] Checking for updates...
	I0729 14:23:37.828811 1023348 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:23:37.830498 1023348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:23:37.831931 1023348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:23:37.833487 1023348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:23:37.835154 1023348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:23:37.836639 1023348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:23:37.838497 1023348 config.go:182] Loaded profile config "cert-expiration-869983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:23:37.838577 1023348 config.go:182] Loaded profile config "kubernetes-upgrade-050658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:23:37.838680 1023348 config.go:182] Loaded profile config "pause-414966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:23:37.838763 1023348 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:23:37.875734 1023348 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 14:23:37.876958 1023348 start.go:297] selected driver: kvm2
	I0729 14:23:37.876977 1023348 start.go:901] validating driver "kvm2" against <nil>
	I0729 14:23:37.876986 1023348 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:23:37.877661 1023348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:23:37.877722 1023348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:23:37.893415 1023348 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:23:37.893452 1023348 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 14:23:37.893673 1023348 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 14:23:37.893722 1023348 cni.go:84] Creating CNI manager for ""
	I0729 14:23:37.893730 1023348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:23:37.893736 1023348 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 14:23:37.893806 1023348 start.go:340] cluster config:
	{Name:cert-options-442776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-442776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.
1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0729 14:23:37.893910 1023348 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:23:37.895818 1023348 out.go:177] * Starting "cert-options-442776" primary control-plane node in "cert-options-442776" cluster
	I0729 14:23:37.897268 1023348 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:23:37.897296 1023348 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 14:23:37.897302 1023348 cache.go:56] Caching tarball of preloaded images
	I0729 14:23:37.897385 1023348 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:23:37.897392 1023348 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 14:23:37.897491 1023348 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/cert-options-442776/config.json ...
	I0729 14:23:37.897504 1023348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/cert-options-442776/config.json: {Name:mkb799c6fa34df2f36b22f7ab03cf322b62992b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:23:37.897624 1023348 start.go:360] acquireMachinesLock for cert-options-442776: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:23:37.897657 1023348 start.go:364] duration metric: took 17.003µs to acquireMachinesLock for "cert-options-442776"
	I0729 14:23:37.897670 1023348 start.go:93] Provisioning new machine with config: &{Name:cert-options-442776 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.30.3 ClusterName:cert-options-442776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:23:37.897714 1023348 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 14:23:39.201825 1023077 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16 8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c 8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee 692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8 f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd 1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466 594d17490a04d5ffe49bbf6c7987154d651e6110bbeea4dacc652c85edf3d360: (9.772261526s)
	I0729 14:23:39.201911 1023077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:23:39.245235 1023077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:23:39.256763 1023077 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 29 14:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 29 14:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 29 14:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 29 14:22 /etc/kubernetes/scheduler.conf
	
	I0729 14:23:39.256841 1023077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:23:39.266710 1023077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:23:39.276692 1023077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:23:39.286281 1023077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:23:39.286355 1023077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:23:39.296449 1023077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:23:39.306079 1023077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:23:39.306155 1023077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
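
The grep/rm sequence above keeps a kubeconfig only if it already points at the expected control-plane endpoint; files that do not are deleted so `kubeadm init phase kubeconfig` regenerates them. A local-file sketch of that decision (the name `pruneStaleKubeconfig` is hypothetical; the real run executes `sudo grep`/`sudo rm` over SSH):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfig removes an existing kubeconfig that does not reference
// the expected control-plane endpoint, so it can be regenerated cleanly.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the right endpoint, keep it
	}
	fmt.Printf("%q not found in %s - removing\n", endpoint, path)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(f, "https://control-plane.minikube.internal:8443"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```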
	I0729 14:23:39.316781 1023077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:23:39.327238 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:39.393042 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:40.052024 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:40.304129 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:40.412862 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:40.593265 1023077 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:23:40.593374 1023077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:23:41.093868 1023077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:23:41.125849 1023077 api_server.go:72] duration metric: took 532.585175ms to wait for apiserver process to appear ...
	I0729 14:23:41.125898 1023077 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:23:41.125922 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:41.126475 1023077 api_server.go:269] stopped: https://192.168.50.133:8443/healthz: Get "https://192.168.50.133:8443/healthz": dial tcp 192.168.50.133:8443: connect: connection refused
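
The healthz polling that follows treats connection refused, 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still running) as "not ready yet" and keeps retrying until the endpoint returns 200. A self-contained sketch of that loop (`waitForHealthz` is a hypothetical helper, not minikube's own function; the URL is the one from the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200,
// tolerating the intermediate connection-refused/403/500 states seen above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serving cert is not trusted by this host, so skip
		// verification for this readiness probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.133:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```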
	I0729 14:23:37.899397 1023348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 14:23:37.899527 1023348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:23:37.899559 1023348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:23:37.913532 1023348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46181
	I0729 14:23:37.913972 1023348 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:23:37.914515 1023348 main.go:141] libmachine: Using API Version  1
	I0729 14:23:37.914529 1023348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:23:37.914857 1023348 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:23:37.915032 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetMachineName
	I0729 14:23:37.915159 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:23:37.915281 1023348 start.go:159] libmachine.API.Create for "cert-options-442776" (driver="kvm2")
	I0729 14:23:37.915305 1023348 client.go:168] LocalClient.Create starting
	I0729 14:23:37.915338 1023348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 14:23:37.915370 1023348 main.go:141] libmachine: Decoding PEM data...
	I0729 14:23:37.915385 1023348 main.go:141] libmachine: Parsing certificate...
	I0729 14:23:37.915450 1023348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 14:23:37.915465 1023348 main.go:141] libmachine: Decoding PEM data...
	I0729 14:23:37.915474 1023348 main.go:141] libmachine: Parsing certificate...
	I0729 14:23:37.915485 1023348 main.go:141] libmachine: Running pre-create checks...
	I0729 14:23:37.915495 1023348 main.go:141] libmachine: (cert-options-442776) Calling .PreCreateCheck
	I0729 14:23:37.915813 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetConfigRaw
	I0729 14:23:37.916200 1023348 main.go:141] libmachine: Creating machine...
	I0729 14:23:37.916206 1023348 main.go:141] libmachine: (cert-options-442776) Calling .Create
	I0729 14:23:37.916358 1023348 main.go:141] libmachine: (cert-options-442776) Creating KVM machine...
	I0729 14:23:37.917525 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found existing default KVM network
	I0729 14:23:37.918995 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.918823 1023371 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:c4:5a} reservation:<nil>}
	I0729 14:23:37.920151 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.920073 1023371 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:cd:4b} reservation:<nil>}
	I0729 14:23:37.921373 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.921277 1023371 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:17:03:6b} reservation:<nil>}
	I0729 14:23:37.923619 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.923503 1023371 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 14:23:37.924876 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.924804 1023371 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c40}
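
The DBG lines above walk candidate private /24 subnets (192.168.39.0, .50, .61, .72) and settle on the first one that is neither taken by an existing libvirt network nor reserved, here 192.168.83.0/24. A rough sketch of that search; the step of 11 and the starting point are inferred from the addresses in this log, not taken from the minikube source:

```go
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that is not already in use.
func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for third := 39; third <= 254; third += 11 { // step inferred from the log
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	taken := map[string]bool{
		"192.168.39.0/24": true,
		"192.168.50.0/24": true,
		"192.168.61.0/24": true,
		"192.168.72.0/24": true,
	}
	if subnet, err := firstFreeSubnet(taken); err == nil {
		fmt.Println("using free private subnet", subnet)
	}
}
```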
	I0729 14:23:37.924902 1023348 main.go:141] libmachine: (cert-options-442776) DBG | created network xml: 
	I0729 14:23:37.924921 1023348 main.go:141] libmachine: (cert-options-442776) DBG | <network>
	I0729 14:23:37.924937 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   <name>mk-cert-options-442776</name>
	I0729 14:23:37.924948 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   <dns enable='no'/>
	I0729 14:23:37.924955 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   
	I0729 14:23:37.924963 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0729 14:23:37.924970 1023348 main.go:141] libmachine: (cert-options-442776) DBG |     <dhcp>
	I0729 14:23:37.924978 1023348 main.go:141] libmachine: (cert-options-442776) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0729 14:23:37.924984 1023348 main.go:141] libmachine: (cert-options-442776) DBG |     </dhcp>
	I0729 14:23:37.924989 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   </ip>
	I0729 14:23:37.924994 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   
	I0729 14:23:37.924998 1023348 main.go:141] libmachine: (cert-options-442776) DBG | </network>
	I0729 14:23:37.925006 1023348 main.go:141] libmachine: (cert-options-442776) DBG | 
	I0729 14:23:37.930739 1023348 main.go:141] libmachine: (cert-options-442776) DBG | trying to create private KVM network mk-cert-options-442776 192.168.83.0/24...
	I0729 14:23:37.998543 1023348 main.go:141] libmachine: (cert-options-442776) DBG | private KVM network mk-cert-options-442776 192.168.83.0/24 created
	I0729 14:23:37.998633 1023348 main.go:141] libmachine: (cert-options-442776) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776 ...
	I0729 14:23:37.998665 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.998503 1023371 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:23:37.998683 1023348 main.go:141] libmachine: (cert-options-442776) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 14:23:37.998707 1023348 main.go:141] libmachine: (cert-options-442776) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 14:23:38.260689 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:38.260495 1023371 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa...
	I0729 14:23:38.414988 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:38.414828 1023371 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/cert-options-442776.rawdisk...
	I0729 14:23:38.415015 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Writing magic tar header
	I0729 14:23:38.415033 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Writing SSH key tar header
	I0729 14:23:38.415044 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:38.414981 1023371 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776 ...
	I0729 14:23:38.415156 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776
	I0729 14:23:38.415180 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776 (perms=drwx------)
	I0729 14:23:38.415191 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 14:23:38.415202 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 14:23:38.415214 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 14:23:38.415221 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 14:23:38.415234 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 14:23:38.415241 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 14:23:38.415249 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:23:38.415273 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 14:23:38.415281 1023348 main.go:141] libmachine: (cert-options-442776) Creating domain...
	I0729 14:23:38.415288 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 14:23:38.415295 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins
	I0729 14:23:38.415308 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home
	I0729 14:23:38.415318 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Skipping /home - not owner
	I0729 14:23:38.416558 1023348 main.go:141] libmachine: (cert-options-442776) define libvirt domain using xml: 
	I0729 14:23:38.416591 1023348 main.go:141] libmachine: (cert-options-442776) <domain type='kvm'>
	I0729 14:23:38.416600 1023348 main.go:141] libmachine: (cert-options-442776)   <name>cert-options-442776</name>
	I0729 14:23:38.416607 1023348 main.go:141] libmachine: (cert-options-442776)   <memory unit='MiB'>2048</memory>
	I0729 14:23:38.416614 1023348 main.go:141] libmachine: (cert-options-442776)   <vcpu>2</vcpu>
	I0729 14:23:38.416628 1023348 main.go:141] libmachine: (cert-options-442776)   <features>
	I0729 14:23:38.416635 1023348 main.go:141] libmachine: (cert-options-442776)     <acpi/>
	I0729 14:23:38.416640 1023348 main.go:141] libmachine: (cert-options-442776)     <apic/>
	I0729 14:23:38.416646 1023348 main.go:141] libmachine: (cert-options-442776)     <pae/>
	I0729 14:23:38.416651 1023348 main.go:141] libmachine: (cert-options-442776)     
	I0729 14:23:38.416658 1023348 main.go:141] libmachine: (cert-options-442776)   </features>
	I0729 14:23:38.416663 1023348 main.go:141] libmachine: (cert-options-442776)   <cpu mode='host-passthrough'>
	I0729 14:23:38.416669 1023348 main.go:141] libmachine: (cert-options-442776)   
	I0729 14:23:38.416678 1023348 main.go:141] libmachine: (cert-options-442776)   </cpu>
	I0729 14:23:38.416699 1023348 main.go:141] libmachine: (cert-options-442776)   <os>
	I0729 14:23:38.416707 1023348 main.go:141] libmachine: (cert-options-442776)     <type>hvm</type>
	I0729 14:23:38.416712 1023348 main.go:141] libmachine: (cert-options-442776)     <boot dev='cdrom'/>
	I0729 14:23:38.416716 1023348 main.go:141] libmachine: (cert-options-442776)     <boot dev='hd'/>
	I0729 14:23:38.416726 1023348 main.go:141] libmachine: (cert-options-442776)     <bootmenu enable='no'/>
	I0729 14:23:38.416729 1023348 main.go:141] libmachine: (cert-options-442776)   </os>
	I0729 14:23:38.416733 1023348 main.go:141] libmachine: (cert-options-442776)   <devices>
	I0729 14:23:38.416737 1023348 main.go:141] libmachine: (cert-options-442776)     <disk type='file' device='cdrom'>
	I0729 14:23:38.416744 1023348 main.go:141] libmachine: (cert-options-442776)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/boot2docker.iso'/>
	I0729 14:23:38.416748 1023348 main.go:141] libmachine: (cert-options-442776)       <target dev='hdc' bus='scsi'/>
	I0729 14:23:38.416753 1023348 main.go:141] libmachine: (cert-options-442776)       <readonly/>
	I0729 14:23:38.416756 1023348 main.go:141] libmachine: (cert-options-442776)     </disk>
	I0729 14:23:38.416761 1023348 main.go:141] libmachine: (cert-options-442776)     <disk type='file' device='disk'>
	I0729 14:23:38.416766 1023348 main.go:141] libmachine: (cert-options-442776)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 14:23:38.416774 1023348 main.go:141] libmachine: (cert-options-442776)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/cert-options-442776.rawdisk'/>
	I0729 14:23:38.416781 1023348 main.go:141] libmachine: (cert-options-442776)       <target dev='hda' bus='virtio'/>
	I0729 14:23:38.416785 1023348 main.go:141] libmachine: (cert-options-442776)     </disk>
	I0729 14:23:38.416792 1023348 main.go:141] libmachine: (cert-options-442776)     <interface type='network'>
	I0729 14:23:38.416797 1023348 main.go:141] libmachine: (cert-options-442776)       <source network='mk-cert-options-442776'/>
	I0729 14:23:38.416803 1023348 main.go:141] libmachine: (cert-options-442776)       <model type='virtio'/>
	I0729 14:23:38.416808 1023348 main.go:141] libmachine: (cert-options-442776)     </interface>
	I0729 14:23:38.416814 1023348 main.go:141] libmachine: (cert-options-442776)     <interface type='network'>
	I0729 14:23:38.416819 1023348 main.go:141] libmachine: (cert-options-442776)       <source network='default'/>
	I0729 14:23:38.416822 1023348 main.go:141] libmachine: (cert-options-442776)       <model type='virtio'/>
	I0729 14:23:38.416826 1023348 main.go:141] libmachine: (cert-options-442776)     </interface>
	I0729 14:23:38.416830 1023348 main.go:141] libmachine: (cert-options-442776)     <serial type='pty'>
	I0729 14:23:38.416834 1023348 main.go:141] libmachine: (cert-options-442776)       <target port='0'/>
	I0729 14:23:38.416837 1023348 main.go:141] libmachine: (cert-options-442776)     </serial>
	I0729 14:23:38.416841 1023348 main.go:141] libmachine: (cert-options-442776)     <console type='pty'>
	I0729 14:23:38.416844 1023348 main.go:141] libmachine: (cert-options-442776)       <target type='serial' port='0'/>
	I0729 14:23:38.416857 1023348 main.go:141] libmachine: (cert-options-442776)     </console>
	I0729 14:23:38.416866 1023348 main.go:141] libmachine: (cert-options-442776)     <rng model='virtio'>
	I0729 14:23:38.416871 1023348 main.go:141] libmachine: (cert-options-442776)       <backend model='random'>/dev/random</backend>
	I0729 14:23:38.416874 1023348 main.go:141] libmachine: (cert-options-442776)     </rng>
	I0729 14:23:38.416878 1023348 main.go:141] libmachine: (cert-options-442776)     
	I0729 14:23:38.416881 1023348 main.go:141] libmachine: (cert-options-442776)     
	I0729 14:23:38.416885 1023348 main.go:141] libmachine: (cert-options-442776)   </devices>
	I0729 14:23:38.416888 1023348 main.go:141] libmachine: (cert-options-442776) </domain>
	I0729 14:23:38.416895 1023348 main.go:141] libmachine: (cert-options-442776) 
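
The domain definition printed above is plain libvirt XML with a handful of per-profile values (name, memory, CPUs, disk path, network). A trimmed-down sketch of generating such a document with text/template, keeping only the fields that vary and copying the rest from the log (this is illustrative, not the template minikube itself uses):

```go
package main

import (
	"os"
	"text/template"
)

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name:      "cert-options-442776",
		MemoryMiB: 2048,
		CPUs:      2,
		DiskPath:  "/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/cert-options-442776.rawdisk",
		Network:   "mk-cert-options-442776",
	}
	_ = tmpl.Execute(os.Stdout, cfg)
}
```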
	I0729 14:23:38.421890 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:36:3e:89 in network default
	I0729 14:23:38.422552 1023348 main.go:141] libmachine: (cert-options-442776) Ensuring networks are active...
	I0729 14:23:38.422576 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:38.423268 1023348 main.go:141] libmachine: (cert-options-442776) Ensuring network default is active
	I0729 14:23:38.423709 1023348 main.go:141] libmachine: (cert-options-442776) Ensuring network mk-cert-options-442776 is active
	I0729 14:23:38.424237 1023348 main.go:141] libmachine: (cert-options-442776) Getting domain xml...
	I0729 14:23:38.424976 1023348 main.go:141] libmachine: (cert-options-442776) Creating domain...
	I0729 14:23:38.745900 1023348 main.go:141] libmachine: (cert-options-442776) Waiting to get IP...
	I0729 14:23:38.746585 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:38.747071 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:38.747103 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:38.747048 1023371 retry.go:31] will retry after 302.81585ms: waiting for machine to come up
	I0729 14:23:39.051632 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:39.052146 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:39.052168 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:39.052089 1023371 retry.go:31] will retry after 318.183671ms: waiting for machine to come up
	I0729 14:23:39.371631 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:39.372289 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:39.372318 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:39.372213 1023371 retry.go:31] will retry after 385.473463ms: waiting for machine to come up
	I0729 14:23:39.758958 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:39.759477 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:39.759501 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:39.759417 1023371 retry.go:31] will retry after 491.502129ms: waiting for machine to come up
	I0729 14:23:40.252171 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:40.252770 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:40.252793 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:40.252701 1023371 retry.go:31] will retry after 648.148309ms: waiting for machine to come up
	I0729 14:23:40.902788 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:40.903274 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:40.903336 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:40.903235 1023371 retry.go:31] will retry after 872.321979ms: waiting for machine to come up
	I0729 14:23:41.777537 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:41.778023 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:41.778041 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:41.778001 1023371 retry.go:31] will retry after 898.968832ms: waiting for machine to come up
	I0729 14:23:42.678598 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:42.678992 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:42.679015 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:42.678932 1023371 retry.go:31] will retry after 1.366079761s: waiting for machine to come up
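
The retry.go lines above show the driver polling for the new VM's DHCP lease with a growing, jittered delay (302ms, 318ms, 385ms, ... 1.37s). A sketch of that wait loop; the concrete delays, growth factor, and the `waitForIP`/`lookup` names are illustrative assumptions, not the values or API of minikube's retry package:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries a lookup with a growing, jittered delay. lookup stands in
// for "ask libvirt for the domain's current DHCP lease".
func waitForIP(lookup func() (string, bool), maxAttempts int) (string, error) {
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay += delay / 3 // grow the base delay between attempts
	}
	return "", fmt.Errorf("machine did not report an IP after %d attempts", maxAttempts)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		if attempts < 5 {
			return "", false // simulate "unable to find current IP address"
		}
		return "192.168.83.2", true
	}, 10)
	fmt.Println(ip, err)
}
```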
	I0729 14:23:41.626605 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:44.461435 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:23:44.461469 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:23:44.461485 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:44.491343 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:23:44.491381 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:23:44.626625 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:44.631551 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:23:44.631594 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:23:45.126140 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:45.130809 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:23:45.130848 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:23:45.626525 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:45.632787 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:23:45.632817 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:23:46.126177 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:46.132204 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 200:
	ok
	I0729 14:23:46.139857 1023077 api_server.go:141] control plane version: v1.30.3
	I0729 14:23:46.139888 1023077 api_server.go:131] duration metric: took 5.013982606s to wait for apiserver health ...
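	The 403, then 500, then 200 responses above are the usual restart sequence for kube-apiserver: anonymous requests to /healthz are rejected with 403 until the RBAC bootstrap roles have been created, the endpoint then returns 500 while the remaining poststart hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and it only returns 200 once every hook reports ok. The same verbose endpoint can be queried by hand; the address below is the one polled in this run, and -k merely skips verification of the self-signed serving certificate:
	
		# list every poststarthook with its [+]/[-] status, as in the log output above
		curl -sk https://192.168.50.133:8443/healthz?verbose
	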
	I0729 14:23:46.139899 1023077 cni.go:84] Creating CNI manager for ""
	I0729 14:23:46.139908 1023077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:23:46.142195 1023077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:23:46.143970 1023077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:23:46.155651 1023077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
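	The 496-byte payload copied above is minikube's bridge CNI configuration for this profile. What actually landed on the node can be inspected over the same SSH path; the profile name and file path below are the ones from this run:
	
		# print the bridge CNI config that the scp step just wrote
		minikube -p pause-414966 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	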
	I0729 14:23:46.174812 1023077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:23:46.192007 1023077 system_pods.go:59] 6 kube-system pods found
	I0729 14:23:46.192046 1023077 system_pods.go:61] "coredns-7db6d8ff4d-5pskj" [2464169b-59c6-4285-a694-b80fa182e201] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:23:46.192058 1023077 system_pods.go:61] "etcd-pause-414966" [d9179f9e-aafe-4b1f-80b1-dd18a3af76d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:23:46.192067 1023077 system_pods.go:61] "kube-apiserver-pause-414966" [c763bb96-2d45-48c7-a710-7bf74b3c731e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:23:46.192076 1023077 system_pods.go:61] "kube-controller-manager-pause-414966" [c112b4ee-82c6-4099-a728-c9698c7dd8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:23:46.192083 1023077 system_pods.go:61] "kube-proxy-2dhx5" [f848393f-5676-41d7-b9ba-1959514af9da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:23:46.192090 1023077 system_pods.go:61] "kube-scheduler-pause-414966" [9cd2428e-4c42-4153-84e1-683103b5640f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:23:46.192098 1023077 system_pods.go:74] duration metric: took 17.261835ms to wait for pod list to return data ...
	I0729 14:23:46.192108 1023077 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:23:46.195395 1023077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:23:46.195424 1023077 node_conditions.go:123] node cpu capacity is 2
	I0729 14:23:46.195437 1023077 node_conditions.go:105] duration metric: took 3.32314ms to run NodePressure ...
	I0729 14:23:46.195479 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:46.460099 1023077 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:23:44.047381 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:44.047952 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:44.047975 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:44.047885 1023371 retry.go:31] will retry after 1.497047858s: waiting for machine to come up
	I0729 14:23:45.547467 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:45.548055 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:45.548080 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:45.548005 1023371 retry.go:31] will retry after 1.432302652s: waiting for machine to come up
	I0729 14:23:46.982374 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:46.982884 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:46.982912 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:46.982807 1023371 retry.go:31] will retry after 2.77354977s: waiting for machine to come up
	I0729 14:23:46.465323 1023077 kubeadm.go:739] kubelet initialised
	I0729 14:23:46.465350 1023077 kubeadm.go:740] duration metric: took 5.216165ms waiting for restarted kubelet to initialise ...
	I0729 14:23:46.465369 1023077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:23:46.471142 1023077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace to be "Ready" ...
	I0729 14:23:46.978477 1023077 pod_ready.go:92] pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace has status "Ready":"True"
	I0729 14:23:46.978506 1023077 pod_ready.go:81] duration metric: took 507.333647ms for pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace to be "Ready" ...
	I0729 14:23:46.978520 1023077 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:23:48.986524 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:23:49.757611 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:49.758097 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:49.758115 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:49.758037 1023371 retry.go:31] will retry after 2.40654272s: waiting for machine to come up
	I0729 14:23:52.165935 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:52.166308 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:52.166325 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:52.166263 1023371 retry.go:31] will retry after 3.450920562s: waiting for machine to come up
	I0729 14:23:51.484961 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:23:53.485238 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:23:55.985411 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:23:55.620205 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:55.620706 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:55.620875 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:55.620696 1023371 retry.go:31] will retry after 5.00068248s: waiting for machine to come up
	I0729 14:23:57.985576 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:24:00.487011 1023077 pod_ready.go:92] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:00.487040 1023077 pod_ready.go:81] duration metric: took 13.50851267s for pod "etcd-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:00.487052 1023077 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:00.493229 1023077 pod_ready.go:92] pod "kube-apiserver-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:00.493250 1023077 pod_ready.go:81] duration metric: took 6.191634ms for pod "kube-apiserver-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:00.493260 1023077 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.000826 1023077 pod_ready.go:92] pod "kube-controller-manager-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:01.000866 1023077 pod_ready.go:81] duration metric: took 507.589786ms for pod "kube-controller-manager-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.000881 1023077 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2dhx5" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.005663 1023077 pod_ready.go:92] pod "kube-proxy-2dhx5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:01.005681 1023077 pod_ready.go:81] duration metric: took 4.792457ms for pod "kube-proxy-2dhx5" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.005689 1023077 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.009721 1023077 pod_ready.go:92] pod "kube-scheduler-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:01.009739 1023077 pod_ready.go:81] duration metric: took 4.042575ms for pod "kube-scheduler-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.009745 1023077 pod_ready.go:38] duration metric: took 14.544364047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
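	The pod_ready loop above polls each control-plane pod until its Ready condition is True; etcd was the slowest here at roughly 13.5s. The same wait can be expressed with kubectl, assuming the profile's kubeconfig context carries the profile name pause-414966:
	
		# block until the etcd static pod reports Ready, or give up after 4 minutes
		kubectl --context pause-414966 -n kube-system wait --for=condition=Ready pod/etcd-pause-414966 --timeout=4m
	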
	I0729 14:24:01.009762 1023077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:24:01.024748 1023077 ops.go:34] apiserver oom_adj: -16
	I0729 14:24:01.024771 1023077 kubeadm.go:597] duration metric: took 31.679282749s to restartPrimaryControlPlane
	I0729 14:24:01.024780 1023077 kubeadm.go:394] duration metric: took 31.835205899s to StartCluster
	I0729 14:24:01.024802 1023077 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:24:01.024890 1023077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:24:01.025786 1023077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:24:01.026023 1023077 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:24:01.026100 1023077 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:24:01.026267 1023077 config.go:182] Loaded profile config "pause-414966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:24:01.027793 1023077 out.go:177] * Verifying Kubernetes components...
	I0729 14:24:01.027796 1023077 out.go:177] * Enabled addons: 
	I0729 14:24:01.029092 1023077 addons.go:510] duration metric: took 2.995656ms for enable addons: enabled=[]
	I0729 14:24:01.029151 1023077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:24:01.200858 1023077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:24:01.220552 1023077 node_ready.go:35] waiting up to 6m0s for node "pause-414966" to be "Ready" ...
	I0729 14:24:01.223753 1023077 node_ready.go:49] node "pause-414966" has status "Ready":"True"
	I0729 14:24:01.223775 1023077 node_ready.go:38] duration metric: took 3.188285ms for node "pause-414966" to be "Ready" ...
	I0729 14:24:01.223785 1023077 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:24:01.285918 1023077 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:00.623602 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.624104 1023348 main.go:141] libmachine: (cert-options-442776) Found IP for machine: 192.168.83.83
	I0729 14:24:00.624127 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has current primary IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.624136 1023348 main.go:141] libmachine: (cert-options-442776) Reserving static IP address...
	I0729 14:24:00.624551 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find host DHCP lease matching {name: "cert-options-442776", mac: "52:54:00:1e:8c:65", ip: "192.168.83.83"} in network mk-cert-options-442776
	I0729 14:24:00.697535 1023348 main.go:141] libmachine: (cert-options-442776) Reserved static IP address: 192.168.83.83
	I0729 14:24:00.697559 1023348 main.go:141] libmachine: (cert-options-442776) Waiting for SSH to be available...
	I0729 14:24:00.697568 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Getting to WaitForSSH function...
	I0729 14:24:00.700095 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.700539 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:00.700565 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.700747 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Using SSH client type: external
	I0729 14:24:00.700761 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa (-rw-------)
	I0729 14:24:00.700815 1023348 main.go:141] libmachine: (cert-options-442776) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:24:00.700834 1023348 main.go:141] libmachine: (cert-options-442776) DBG | About to run SSH command:
	I0729 14:24:00.700847 1023348 main.go:141] libmachine: (cert-options-442776) DBG | exit 0
	I0729 14:24:00.828338 1023348 main.go:141] libmachine: (cert-options-442776) DBG | SSH cmd err, output: <nil>: 
	I0729 14:24:00.828657 1023348 main.go:141] libmachine: (cert-options-442776) KVM machine creation complete!
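	The long run of "unable to find current IP address" retries earlier in this log ends once the guest obtains a DHCP lease on the mk-cert-options-442776 libvirt network; each retry is essentially libmachine re-checking the lease table. The same information is visible from the host with virsh against the qemu:///system connection the driver uses:
	
		# show active DHCP leases on the profile's dedicated network
		virsh net-dhcp-leases mk-cert-options-442776
	
		# or ask the domain directly for its interface addresses
		virsh domifaddr cert-options-442776
	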
	I0729 14:24:00.829045 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetConfigRaw
	I0729 14:24:00.829546 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:00.829751 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:00.829880 1023348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 14:24:00.829898 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetState
	I0729 14:24:00.831126 1023348 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 14:24:00.831135 1023348 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 14:24:00.831139 1023348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 14:24:00.831144 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:00.833711 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.834080 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:00.834096 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.834251 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:00.834442 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:00.834637 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:00.834778 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:00.834916 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:00.835102 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:00.835107 1023348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 14:24:00.947882 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:24:00.947904 1023348 main.go:141] libmachine: Detecting the provisioner...
	I0729 14:24:00.947911 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:00.950885 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.951240 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:00.951266 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.951470 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:00.951681 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:00.951850 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:00.951950 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:00.952124 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:00.952284 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:00.952300 1023348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 14:24:01.068979 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 14:24:01.069093 1023348 main.go:141] libmachine: found compatible host: buildroot
	I0729 14:24:01.069101 1023348 main.go:141] libmachine: Provisioning with buildroot...
	I0729 14:24:01.069111 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetMachineName
	I0729 14:24:01.069390 1023348 buildroot.go:166] provisioning hostname "cert-options-442776"
	I0729 14:24:01.069410 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetMachineName
	I0729 14:24:01.069621 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.072101 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.072585 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.072620 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.072719 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.072899 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.073057 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.073192 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.073364 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:01.073538 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:01.073552 1023348 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-442776 && echo "cert-options-442776" | sudo tee /etc/hostname
	I0729 14:24:01.198724 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-442776
	
	I0729 14:24:01.198743 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.201864 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.202220 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.202248 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.202424 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.202633 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.202789 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.202914 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.203053 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:01.203218 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:01.203229 1023348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-442776' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-442776/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-442776' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:24:01.325604 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:24:01.325633 1023348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:24:01.325673 1023348 buildroot.go:174] setting up certificates
	I0729 14:24:01.325689 1023348 provision.go:84] configureAuth start
	I0729 14:24:01.325703 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetMachineName
	I0729 14:24:01.326036 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetIP
	I0729 14:24:01.329100 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.329477 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.329506 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.329668 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.332091 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.332457 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.332478 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.332633 1023348 provision.go:143] copyHostCerts
	I0729 14:24:01.332680 1023348 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:24:01.332686 1023348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:24:01.332747 1023348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:24:01.332847 1023348 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:24:01.332850 1023348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:24:01.332878 1023348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:24:01.332936 1023348 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:24:01.332938 1023348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:24:01.332957 1023348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:24:01.333016 1023348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.cert-options-442776 san=[127.0.0.1 192.168.83.83 cert-options-442776 localhost minikube]
	I0729 14:24:01.514202 1023348 provision.go:177] copyRemoteCerts
	I0729 14:24:01.514255 1023348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:24:01.514296 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.517086 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.517415 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.517435 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.517605 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.517785 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.517939 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.518061 1023348 sshutil.go:53] new ssh client: &{IP:192.168.83.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa Username:docker}
	I0729 14:24:01.602784 1023348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:24:01.627806 1023348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 14:24:01.652155 1023348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:24:01.675741 1023348 provision.go:87] duration metric: took 350.039931ms to configureAuth
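	configureAuth generated a server certificate with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.83.83, cert-options-442776, localhost, minikube) and copied it to /etc/docker on the guest. For a cert-options profile the SAN list is the part worth verifying; a plain openssl inspection on the guest (for example via minikube -p cert-options-442776 ssh) shows it:
	
		# dump the copied certificate and pick out the Subject Alternative Name extension
		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	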
	I0729 14:24:01.675760 1023348 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:24:01.675954 1023348 config.go:182] Loaded profile config "cert-options-442776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:24:01.676034 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.678845 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.679154 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.679173 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.679330 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.679532 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.679704 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.679894 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.680060 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:01.680227 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:01.680237 1023348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:24:01.952519 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:24:01.952546 1023348 main.go:141] libmachine: Checking connection to Docker...
	I0729 14:24:01.952556 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetURL
	I0729 14:24:01.953971 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Using libvirt version 6000000
	I0729 14:24:01.956386 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.956742 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.956765 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.956945 1023348 main.go:141] libmachine: Docker is up and running!
	I0729 14:24:01.956955 1023348 main.go:141] libmachine: Reticulating splines...
	I0729 14:24:01.956961 1023348 client.go:171] duration metric: took 24.041650515s to LocalClient.Create
	I0729 14:24:01.956987 1023348 start.go:167] duration metric: took 24.041706148s to libmachine.API.Create "cert-options-442776"
	I0729 14:24:01.956995 1023348 start.go:293] postStartSetup for "cert-options-442776" (driver="kvm2")
	I0729 14:24:01.957006 1023348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:24:01.957025 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:01.957281 1023348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:24:01.957302 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.959388 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.959706 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.959728 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.959853 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.960051 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.960222 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.960364 1023348 sshutil.go:53] new ssh client: &{IP:192.168.83.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa Username:docker}
	I0729 14:24:02.046694 1023348 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:24:02.051009 1023348 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:24:02.051029 1023348 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:24:02.051098 1023348 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:24:02.051180 1023348 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:24:02.051284 1023348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:24:02.060434 1023348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:24:02.087037 1023348 start.go:296] duration metric: took 130.027636ms for postStartSetup
	I0729 14:24:02.087075 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetConfigRaw
	I0729 14:24:02.087648 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetIP
	I0729 14:24:02.090309 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.090710 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.090744 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.091018 1023348 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/cert-options-442776/config.json ...
	I0729 14:24:02.091200 1023348 start.go:128] duration metric: took 24.193478851s to createHost
	I0729 14:24:02.091215 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:02.093357 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.093739 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.093760 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.093912 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:02.094086 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:02.094272 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:02.094409 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:02.094544 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:02.094721 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:02.094726 1023348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:24:02.209146 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263042.185538996
	
	I0729 14:24:02.209163 1023348 fix.go:216] guest clock: 1722263042.185538996
	I0729 14:24:02.209169 1023348 fix.go:229] Guest: 2024-07-29 14:24:02.185538996 +0000 UTC Remote: 2024-07-29 14:24:02.091205534 +0000 UTC m=+24.306330108 (delta=94.333462ms)
	I0729 14:24:02.209193 1023348 fix.go:200] guest clock delta is within tolerance: 94.333462ms
	I0729 14:24:02.209204 1023348 start.go:83] releasing machines lock for "cert-options-442776", held for 24.311540445s
	I0729 14:24:02.209225 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:02.209544 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetIP
	I0729 14:24:02.212442 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.212866 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.212899 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.213037 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:02.213540 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:02.213762 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:02.213834 1023348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:24:02.213896 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:02.214037 1023348 ssh_runner.go:195] Run: cat /version.json
	I0729 14:24:02.214056 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:02.216549 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.216933 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.216958 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.216975 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.217206 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:02.217393 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:02.217480 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.217495 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.217663 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:02.217687 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:02.217859 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:02.217886 1023348 sshutil.go:53] new ssh client: &{IP:192.168.83.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa Username:docker}
	I0729 14:24:02.218004 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:02.218137 1023348 sshutil.go:53] new ssh client: &{IP:192.168.83.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa Username:docker}
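The two sshutil.go lines above show minikube opening SSH clients to the guest at 192.168.83.83:22 with the machine's id_rsa key; every ssh_runner.go step that follows (systemctl --version, cat /version.json, the sed edits below) runs over such a session. A minimal sketch of that pattern using golang.org/x/crypto/ssh, assuming the key path and address taken from the log; this is an illustration, not minikube's actual sshutil/ssh_runner code.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address copied from the sshutil.go lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.83.83:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One session per command, mirroring each ssh_runner.go "Run:" line.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("systemctl --version")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}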
	I0729 14:24:02.297098 1023348 ssh_runner.go:195] Run: systemctl --version
	I0729 14:24:02.320345 1023348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:24:02.486089 1023348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:24:02.492007 1023348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:24:02.492071 1023348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:24:02.508741 1023348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:24:02.508764 1023348 start.go:495] detecting cgroup driver to use...
	I0729 14:24:02.508819 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:24:02.525797 1023348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:24:02.540718 1023348 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:24:02.540760 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:24:02.555144 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:24:02.570325 1023348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:24:02.689020 1023348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:24:02.842816 1023348 docker.go:233] disabling docker service ...
	I0729 14:24:02.842896 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:24:02.858242 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:24:02.871154 1023348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:24:02.995569 1023348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:24:03.126554 1023348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:24:03.140550 1023348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:24:03.159288 1023348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:24:03.159352 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.170399 1023348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:24:03.170456 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.182170 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.193053 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.203248 1023348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:24:03.213990 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.223942 1023348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.240960 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.252797 1023348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:24:03.263592 1023348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:24:03.263631 1023348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:24:03.278209 1023348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:24:03.288262 1023348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:24:03.417216 1023348 ssh_runner.go:195] Run: sudo systemctl restart crio
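The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs", re-add conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0, after which crio is restarted. A small Go sketch of the same kind of in-place rewrite, covering only the pause_image line and assuming the drop-in file is readable locally (the real flow runs sed on the guest over SSH).

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}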
	I0729 14:24:03.572810 1023348 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:24:03.572873 1023348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:24:03.578191 1023348 start.go:563] Will wait 60s for crictl version
	I0729 14:24:03.578239 1023348 ssh_runner.go:195] Run: which crictl
	I0729 14:24:03.581925 1023348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:24:03.631978 1023348 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:24:03.632063 1023348 ssh_runner.go:195] Run: crio --version
	I0729 14:24:03.660364 1023348 ssh_runner.go:195] Run: crio --version
	I0729 14:24:03.689600 1023348 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:24:01.683072 1023077 pod_ready.go:92] pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:01.683099 1023077 pod_ready.go:81] duration metric: took 397.149379ms for pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.683115 1023077 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.082866 1023077 pod_ready.go:92] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:02.082895 1023077 pod_ready.go:81] duration metric: took 399.771982ms for pod "etcd-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.082905 1023077 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.482854 1023077 pod_ready.go:92] pod "kube-apiserver-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:02.482886 1023077 pod_ready.go:81] duration metric: took 399.974762ms for pod "kube-apiserver-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.482897 1023077 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.883313 1023077 pod_ready.go:92] pod "kube-controller-manager-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:02.883339 1023077 pod_ready.go:81] duration metric: took 400.434931ms for pod "kube-controller-manager-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.883352 1023077 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2dhx5" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:03.282556 1023077 pod_ready.go:92] pod "kube-proxy-2dhx5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:03.282590 1023077 pod_ready.go:81] duration metric: took 399.229037ms for pod "kube-proxy-2dhx5" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:03.282606 1023077 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:03.682074 1023077 pod_ready.go:92] pod "kube-scheduler-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:03.682101 1023077 pod_ready.go:81] duration metric: took 399.487813ms for pod "kube-scheduler-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:03.682109 1023077 pod_ready.go:38] duration metric: took 2.458311492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
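The pod_ready.go lines above poll each system-critical pod in the kube-system namespace until it reports a Ready=True condition. A rough client-go sketch of that per-pod check; isPodReady is a hypothetical helper for illustration (not minikube's pod_ready.go), and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// isPodReady reports whether the named kube-system pod has a Ready=True condition.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumes a default kubeconfig; the test harness uses the profile's own context.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ready, err := isPodReady(context.Background(), cs, "etcd-pause-414966")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ready:", ready)
}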
	I0729 14:24:03.682125 1023077 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:24:03.682178 1023077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:24:03.697668 1023077 api_server.go:72] duration metric: took 2.671613016s to wait for apiserver process to appear ...
	I0729 14:24:03.697697 1023077 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:24:03.697720 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:24:03.701895 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 200:
	ok
	I0729 14:24:03.702808 1023077 api_server.go:141] control plane version: v1.30.3
	I0729 14:24:03.702828 1023077 api_server.go:131] duration metric: took 5.124335ms to wait for apiserver health ...
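The healthz wait above is an HTTPS GET against https://192.168.50.133:8443/healthz, accepted once it returns 200 with body "ok". A minimal sketch of that probe; certificate verification is skipped here purely for illustration, whereas a real client should trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Skip TLS verification only for this illustrative probe.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.133:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers "200 ok"
}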
	I0729 14:24:03.702836 1023077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:24:03.885573 1023077 system_pods.go:59] 6 kube-system pods found
	I0729 14:24:03.885610 1023077 system_pods.go:61] "coredns-7db6d8ff4d-5pskj" [2464169b-59c6-4285-a694-b80fa182e201] Running
	I0729 14:24:03.885617 1023077 system_pods.go:61] "etcd-pause-414966" [d9179f9e-aafe-4b1f-80b1-dd18a3af76d1] Running
	I0729 14:24:03.885620 1023077 system_pods.go:61] "kube-apiserver-pause-414966" [c763bb96-2d45-48c7-a710-7bf74b3c731e] Running
	I0729 14:24:03.885625 1023077 system_pods.go:61] "kube-controller-manager-pause-414966" [c112b4ee-82c6-4099-a728-c9698c7dd8df] Running
	I0729 14:24:03.885629 1023077 system_pods.go:61] "kube-proxy-2dhx5" [f848393f-5676-41d7-b9ba-1959514af9da] Running
	I0729 14:24:03.885636 1023077 system_pods.go:61] "kube-scheduler-pause-414966" [9cd2428e-4c42-4153-84e1-683103b5640f] Running
	I0729 14:24:03.885643 1023077 system_pods.go:74] duration metric: took 182.802434ms to wait for pod list to return data ...
	I0729 14:24:03.885653 1023077 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:24:04.082476 1023077 default_sa.go:45] found service account: "default"
	I0729 14:24:04.082511 1023077 default_sa.go:55] duration metric: took 196.849805ms for default service account to be created ...
	I0729 14:24:04.082524 1023077 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:24:04.286667 1023077 system_pods.go:86] 6 kube-system pods found
	I0729 14:24:04.286706 1023077 system_pods.go:89] "coredns-7db6d8ff4d-5pskj" [2464169b-59c6-4285-a694-b80fa182e201] Running
	I0729 14:24:04.286716 1023077 system_pods.go:89] "etcd-pause-414966" [d9179f9e-aafe-4b1f-80b1-dd18a3af76d1] Running
	I0729 14:24:04.286723 1023077 system_pods.go:89] "kube-apiserver-pause-414966" [c763bb96-2d45-48c7-a710-7bf74b3c731e] Running
	I0729 14:24:04.286730 1023077 system_pods.go:89] "kube-controller-manager-pause-414966" [c112b4ee-82c6-4099-a728-c9698c7dd8df] Running
	I0729 14:24:04.286736 1023077 system_pods.go:89] "kube-proxy-2dhx5" [f848393f-5676-41d7-b9ba-1959514af9da] Running
	I0729 14:24:04.286742 1023077 system_pods.go:89] "kube-scheduler-pause-414966" [9cd2428e-4c42-4153-84e1-683103b5640f] Running
	I0729 14:24:04.286751 1023077 system_pods.go:126] duration metric: took 204.220016ms to wait for k8s-apps to be running ...
	I0729 14:24:04.286764 1023077 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:24:04.286825 1023077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:24:04.303981 1023077 system_svc.go:56] duration metric: took 17.193271ms WaitForService to wait for kubelet
	I0729 14:24:04.304025 1023077 kubeadm.go:582] duration metric: took 3.277972968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:24:04.304052 1023077 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:24:04.483661 1023077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:24:04.483689 1023077 node_conditions.go:123] node cpu capacity is 2
	I0729 14:24:04.483702 1023077 node_conditions.go:105] duration metric: took 179.644209ms to run NodePressure ...
	I0729 14:24:04.483714 1023077 start.go:241] waiting for startup goroutines ...
	I0729 14:24:04.483720 1023077 start.go:246] waiting for cluster config update ...
	I0729 14:24:04.483726 1023077 start.go:255] writing updated cluster config ...
	I0729 14:24:04.484003 1023077 ssh_runner.go:195] Run: rm -f paused
	I0729 14:24:04.546634 1023077 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:24:04.548956 1023077 out.go:177] * Done! kubectl is now configured to use "pause-414966" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.331386610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25f5d228-ed78-45cf-9dbe-b3923da8d2d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.331925777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491,PodSandboxId:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263025742246828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263025736181495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ad01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63,PodSandboxId:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263020879515462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe
7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205,PodSandboxId:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263020868685933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]
string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe,PodSandboxId:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263020883588283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632,PodSandboxId:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263020849816798,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263008866581913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4a
d01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c,PodSandboxId:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722263006566680784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee,PodSandboxId:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722263006509165892,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8,PodSandboxId:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722263006440075888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd,PodSandboxId:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722263006397891697,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 58713d53fe7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466,PodSandboxId:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722263006288983798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25f5d228-ed78-45cf-9dbe-b3923da8d2d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.383834940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f59b3480-0f15-4a99-95e0-56a88975176d name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.383941214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f59b3480-0f15-4a99-95e0-56a88975176d name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.386335491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d62fd048-358c-4c0d-9677-26e0d63acc91 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.387316021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263045387278982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d62fd048-358c-4c0d-9677-26e0d63acc91 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.388169797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35f0d188-959c-4edf-b47d-471565d8dce5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.388245168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35f0d188-959c-4edf-b47d-471565d8dce5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.389001632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491,PodSandboxId:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263025742246828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263025736181495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ad01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63,PodSandboxId:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263020879515462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe
7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205,PodSandboxId:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263020868685933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]
string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe,PodSandboxId:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263020883588283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632,PodSandboxId:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263020849816798,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263008866581913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4a
d01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c,PodSandboxId:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722263006566680784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee,PodSandboxId:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722263006509165892,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8,PodSandboxId:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722263006440075888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd,PodSandboxId:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722263006397891697,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 58713d53fe7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466,PodSandboxId:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722263006288983798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35f0d188-959c-4edf-b47d-471565d8dce5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.444933068Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe932edf-1724-450a-8470-1bfca4e49a85 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.445042259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe932edf-1724-450a-8470-1bfca4e49a85 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.446500354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64be782d-9673-49c1-8a96-7e97dcafa7da name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.447361669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263045447323399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64be782d-9673-49c1-8a96-7e97dcafa7da name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.450337539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4aa4db16-992e-4e20-bd76-170687ab90b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.450412006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4aa4db16-992e-4e20-bd76-170687ab90b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.451024809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491,PodSandboxId:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263025742246828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263025736181495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ad01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63,PodSandboxId:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263020879515462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe
7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205,PodSandboxId:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263020868685933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]
string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe,PodSandboxId:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263020883588283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632,PodSandboxId:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263020849816798,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263008866581913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4a
d01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c,PodSandboxId:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722263006566680784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee,PodSandboxId:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722263006509165892,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8,PodSandboxId:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722263006440075888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd,PodSandboxId:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722263006397891697,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 58713d53fe7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466,PodSandboxId:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722263006288983798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4aa4db16-992e-4e20-bd76-170687ab90b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.509027227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d55ac6d7-e5ba-4cde-b238-f4efc8f833a7 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.509402221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d55ac6d7-e5ba-4cde-b238-f4efc8f833a7 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.510952931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26b7e9fd-5e1e-4842-868d-775764406560 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.511437836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263045511412687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26b7e9fd-5e1e-4842-868d-775764406560 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.512296704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14c57c40-22fa-4bcb-b305-9898fb71a34f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.512430318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14c57c40-22fa-4bcb-b305-9898fb71a34f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.512813154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491,PodSandboxId:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263025742246828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263025736181495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ad01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63,PodSandboxId:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263020879515462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe
7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205,PodSandboxId:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263020868685933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]
string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe,PodSandboxId:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263020883588283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632,PodSandboxId:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263020849816798,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263008866581913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4a
d01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c,PodSandboxId:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722263006566680784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee,PodSandboxId:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722263006509165892,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8,PodSandboxId:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722263006440075888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd,PodSandboxId:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722263006397891697,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 58713d53fe7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466,PodSandboxId:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722263006288983798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14c57c40-22fa-4bcb-b305-9898fb71a34f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.517567840Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=10a5f405-3caf-47ef-a8ed-97a65d717195 name=/runtime.v1.RuntimeService/Status
	Jul 29 14:24:05 pause-414966 crio[3004]: time="2024-07-29 14:24:05.517750489Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=10a5f405-3caf-47ef-a8ed-97a65d717195 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71373f6ff6510       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago      Running             kube-proxy                2                   a5e006c90fadb       kube-proxy-2dhx5
	56c8cefdf5416       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   219e2305f437b       coredns-7db6d8ff4d-5pskj
	e63c1e8adea00       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   24 seconds ago      Running             kube-controller-manager   2                   43fb3731c3034       kube-controller-manager-pause-414966
	4b7594cadd3ac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   24 seconds ago      Running             kube-scheduler            2                   7cf54881f6b05       kube-scheduler-pause-414966
	5629108c22c2b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   f080ba05af40f       etcd-pause-414966
	cfa261328b16b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   24 seconds ago      Running             kube-apiserver            2                   20ac66bd7fe78       kube-apiserver-pause-414966
	035d53d316c3b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   1                   219e2305f437b       coredns-7db6d8ff4d-5pskj
	8c1298e8b84ad       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   39 seconds ago      Exited              kube-proxy                1                   5a6383a54d1e7       kube-proxy-2dhx5
	8135771c94856       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   39 seconds ago      Exited              etcd                      1                   c931f01508c51       etcd-pause-414966
	692a1cc699d0a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   39 seconds ago      Exited              kube-apiserver            1                   995dca5661b21       kube-apiserver-pause-414966
	f29c0422b10cd       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   39 seconds ago      Exited              kube-scheduler            1                   a935867510ef6       kube-scheduler-pause-414966
	1b4363a555ed2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   39 seconds ago      Exited              kube-controller-manager   1                   a682bc94e6ebc       kube-controller-manager-pause-414966
	
	
	==> coredns [035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40705 - 54386 "HINFO IN 7183347586325243228.5998467049896104713. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.654483181s
	
	
	==> coredns [56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34032 - 1873 "HINFO IN 6959458377755625769.4473733508205659263. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008494837s
	
	
	==> describe nodes <==
	Name:               pause-414966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-414966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=pause-414966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T14_22_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:22:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-414966
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:24:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:23:44 +0000   Mon, 29 Jul 2024 14:22:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:23:44 +0000   Mon, 29 Jul 2024 14:22:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:23:44 +0000   Mon, 29 Jul 2024 14:22:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:23:44 +0000   Mon, 29 Jul 2024 14:22:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.133
	  Hostname:    pause-414966
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 38e4ca4e96464f05b8b461e155716fcb
	  System UUID:                38e4ca4e-9646-4f05-b8b4-61e155716fcb
	  Boot ID:                    35273879-f3ba-43eb-8b3c-d89cb8b4a4db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5pskj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-pause-414966                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         105s
	  kube-system                 kube-apiserver-pause-414966             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-pause-414966    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-2dhx5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-pause-414966             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 19s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  105s (x2 over 105s)  kubelet          Node pause-414966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x2 over 105s)  kubelet          Node pause-414966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x2 over 105s)  kubelet          Node pause-414966 status is now: NodeHasSufficientPID
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeReady                104s                 kubelet          Node pause-414966 status is now: NodeReady
	  Normal  RegisteredNode           92s                  node-controller  Node pause-414966 event: Registered Node pause-414966 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-414966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-414966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)    kubelet          Node pause-414966 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                   node-controller  Node pause-414966 event: Registered Node pause-414966 in Controller
	
	
	==> dmesg <==
	[Jul29 14:22] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.066117] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075068] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.197157] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.141013] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.300364] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.333629] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.062397] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.869332] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.180104] kauditd_printk_skb: 82 callbacks suppressed
	[  +4.869592] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +2.930241] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.991740] systemd-fstab-generator[1492]: Ignoring "noauto" option for root device
	[ +11.481494] kauditd_printk_skb: 88 callbacks suppressed
	[Jul29 14:23] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.150342] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +0.189314] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.248779] systemd-fstab-generator[2488]: Ignoring "noauto" option for root device
	[  +0.901925] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +1.176563] systemd-fstab-generator[3158]: Ignoring "noauto" option for root device
	[ +12.017254] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.084583] kauditd_printk_skb: 243 callbacks suppressed
	[  +5.535235] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.236402] kauditd_printk_skb: 2 callbacks suppressed
	[Jul29 14:24] systemd-fstab-generator[4065]: Ignoring "noauto" option for root device
	
	
	==> etcd [5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205] <==
	{"level":"info","ts":"2024-07-29T14:23:41.271612Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:23:41.27164Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:23:41.27192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 switched to configuration voters=(706005828648577939)"}
	{"level":"info","ts":"2024-07-29T14:23:41.272005Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f95334f23140be1c","local-member-id":"9cc3c8f81f42b93","added-peer-id":"9cc3c8f81f42b93","added-peer-peer-urls":["https://192.168.50.133:2380"]}
	{"level":"info","ts":"2024-07-29T14:23:41.272181Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f95334f23140be1c","local-member-id":"9cc3c8f81f42b93","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:23:41.272234Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:23:41.297457Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T14:23:41.297802Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9cc3c8f81f42b93","initial-advertise-peer-urls":["https://192.168.50.133:2380"],"listen-peer-urls":["https://192.168.50.133:2380"],"advertise-client-urls":["https://192.168.50.133:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.133:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T14:23:41.300133Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T14:23:41.300262Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.133:2380"}
	{"level":"info","ts":"2024-07-29T14:23:41.300291Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.133:2380"}
	{"level":"info","ts":"2024-07-29T14:23:43.117545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T14:23:43.117607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:23:43.117707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 received MsgPreVoteResp from 9cc3c8f81f42b93 at term 2"}
	{"level":"info","ts":"2024-07-29T14:23:43.117724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T14:23:43.117731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 received MsgVoteResp from 9cc3c8f81f42b93 at term 3"}
	{"level":"info","ts":"2024-07-29T14:23:43.117742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T14:23:43.117753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9cc3c8f81f42b93 elected leader 9cc3c8f81f42b93 at term 3"}
	{"level":"info","ts":"2024-07-29T14:23:43.124267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:23:43.124216Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9cc3c8f81f42b93","local-member-attributes":"{Name:pause-414966 ClientURLs:[https://192.168.50.133:2379]}","request-path":"/0/members/9cc3c8f81f42b93/attributes","cluster-id":"f95334f23140be1c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:23:43.125423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:23:43.125845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:23:43.125902Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:23:43.126944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.133:2379"}
	{"level":"info","ts":"2024-07-29T14:23:43.128467Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee] <==
	{"level":"warn","ts":"2024-07-29T14:23:27.068256Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-29T14:23:27.068374Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.133:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.133:2380","--initial-cluster=pause-414966=https://192.168.50.133:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.133:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.133:2380","--name=pause-414966","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-07-29T14:23:27.068781Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-07-29T14:23:27.068845Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-29T14:23:27.068858Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.133:2380"]}
	{"level":"info","ts":"2024-07-29T14:23:27.068896Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T14:23:27.069854Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.133:2379"]}
	{"level":"info","ts":"2024-07-29T14:23:27.070015Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-414966","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.133:2380"],"listen-peer-urls":["https://192.168.50.133:2380"],"advertise-client-urls":["https://192.168.50.133:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.133:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-07-29T14:23:27.079687Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"9.340642ms"}
	{"level":"info","ts":"2024-07-29T14:23:27.094996Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T14:23:27.10368Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"f95334f23140be1c","local-member-id":"9cc3c8f81f42b93","commit-index":427}
	{"level":"info","ts":"2024-07-29T14:23:27.103846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T14:23:27.103909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T14:23:27.103946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9cc3c8f81f42b93 [peers: [], term: 2, commit: 427, applied: 0, lastindex: 427, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T14:23:27.125208Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T14:23:27.147764Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":407}
	
	
	==> kernel <==
	 14:24:05 up 2 min,  0 users,  load average: 1.34, 0.51, 0.18
	Linux pause-414966 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8] <==
	
	
	==> kube-apiserver [cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632] <==
	I0729 14:23:44.592874       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 14:23:44.594191       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 14:23:44.596254       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 14:23:44.596341       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 14:23:44.596398       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 14:23:44.596420       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 14:23:44.596791       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 14:23:44.598760       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 14:23:44.598814       1 policy_source.go:224] refreshing policies
	I0729 14:23:44.600453       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 14:23:44.600488       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 14:23:44.600555       1 aggregator.go:165] initial CRD sync complete...
	I0729 14:23:44.600594       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 14:23:44.600616       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 14:23:44.600638       1 cache.go:39] Caches are synced for autoregister controller
	E0729 14:23:44.604302       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 14:23:44.627746       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 14:23:45.401875       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 14:23:46.297390       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 14:23:46.316722       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 14:23:46.360387       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 14:23:46.392674       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 14:23:46.400409       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 14:23:57.029764       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 14:23:57.059261       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466] <==
	
	
	==> kube-controller-manager [e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe] <==
	I0729 14:23:57.044197       1 shared_informer.go:320] Caches are synced for namespace
	I0729 14:23:57.046796       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 14:23:57.046914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.323µs"
	I0729 14:23:57.049226       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 14:23:57.049883       1 shared_informer.go:320] Caches are synced for TTL
	I0729 14:23:57.050993       1 shared_informer.go:320] Caches are synced for service account
	I0729 14:23:57.052333       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 14:23:57.054191       1 shared_informer.go:320] Caches are synced for taint
	I0729 14:23:57.054298       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 14:23:57.054386       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-414966"
	I0729 14:23:57.054437       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 14:23:57.056052       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 14:23:57.057208       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 14:23:57.057267       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 14:23:57.057631       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 14:23:57.068706       1 shared_informer.go:320] Caches are synced for GC
	I0729 14:23:57.130773       1 shared_informer.go:320] Caches are synced for HPA
	I0729 14:23:57.173447       1 shared_informer.go:320] Caches are synced for deployment
	I0729 14:23:57.232262       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 14:23:57.234409       1 shared_informer.go:320] Caches are synced for disruption
	I0729 14:23:57.248753       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 14:23:57.250190       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 14:23:57.678772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 14:23:57.697941       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 14:23:57.697970       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491] <==
	I0729 14:23:45.937949       1 server_linux.go:69] "Using iptables proxy"
	I0729 14:23:45.947555       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.133"]
	I0729 14:23:45.980377       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 14:23:45.980409       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:23:45.980423       1 server_linux.go:165] "Using iptables Proxier"
	I0729 14:23:45.982900       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 14:23:45.983207       1 server.go:872] "Version info" version="v1.30.3"
	I0729 14:23:45.983399       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:23:45.984516       1 config.go:192] "Starting service config controller"
	I0729 14:23:45.985962       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:23:45.984899       1 config.go:101] "Starting endpoint slice config controller"
	I0729 14:23:45.986008       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:23:45.985526       1 config.go:319] "Starting node config controller"
	I0729 14:23:45.986016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:23:46.086974       1 shared_informer.go:320] Caches are synced for node config
	I0729 14:23:46.087162       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:23:46.087189       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c] <==
	
	
	==> kube-scheduler [4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63] <==
	I0729 14:23:42.029607       1 serving.go:380] Generated self-signed cert in-memory
	W0729 14:23:44.476597       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 14:23:44.476810       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:23:44.476915       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 14:23:44.476942       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 14:23:44.531700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 14:23:44.531854       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:23:44.540888       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 14:23:44.541063       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:23:44.546574       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 14:23:44.546706       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 14:23:44.646880       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd] <==
	
	
	==> kubelet <==
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.541840    3635 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae8aee0c0e3f12fd9587843c4103aeff-ca-certs\") pod \"kube-controller-manager-pause-414966\" (UID: \"ae8aee0c0e3f12fd9587843c4103aeff\") " pod="kube-system/kube-controller-manager-pause-414966"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: E0729 14:23:40.576760    3635 file.go:108] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd.yaml\": /etc/kubernetes/manifests/etcd.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: E0729 14:23:40.642328    3635 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-414966?timeout=10s\": dial tcp 192.168.50.133:8443: connect: connection refused" interval="400ms"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.738536    3635 kubelet_node_status.go:73] "Attempting to register node" node="pause-414966"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: E0729 14:23:40.739718    3635 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.133:8443: connect: connection refused" node="pause-414966"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.829318    3635 scope.go:117] "RemoveContainer" containerID="8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.832778    3635 scope.go:117] "RemoveContainer" containerID="692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.835892    3635 scope.go:117] "RemoveContainer" containerID="1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.839279    3635 scope.go:117] "RemoveContainer" containerID="f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd"
	Jul 29 14:23:41 pause-414966 kubelet[3635]: E0729 14:23:41.045071    3635 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-414966?timeout=10s\": dial tcp 192.168.50.133:8443: connect: connection refused" interval="800ms"
	Jul 29 14:23:41 pause-414966 kubelet[3635]: I0729 14:23:41.141989    3635 kubelet_node_status.go:73] "Attempting to register node" node="pause-414966"
	Jul 29 14:23:41 pause-414966 kubelet[3635]: E0729 14:23:41.143905    3635 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.133:8443: connect: connection refused" node="pause-414966"
	Jul 29 14:23:41 pause-414966 kubelet[3635]: I0729 14:23:41.945410    3635 kubelet_node_status.go:73] "Attempting to register node" node="pause-414966"
	Jul 29 14:23:44 pause-414966 kubelet[3635]: I0729 14:23:44.666817    3635 kubelet_node_status.go:112] "Node was previously registered" node="pause-414966"
	Jul 29 14:23:44 pause-414966 kubelet[3635]: I0729 14:23:44.667260    3635 kubelet_node_status.go:76] "Successfully registered node" node="pause-414966"
	Jul 29 14:23:44 pause-414966 kubelet[3635]: I0729 14:23:44.668726    3635 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 14:23:44 pause-414966 kubelet[3635]: I0729 14:23:44.673512    3635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.409992    3635 apiserver.go:52] "Watching apiserver"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.412937    3635 topology_manager.go:215] "Topology Admit Handler" podUID="f848393f-5676-41d7-b9ba-1959514af9da" podNamespace="kube-system" podName="kube-proxy-2dhx5"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.413082    3635 topology_manager.go:215] "Topology Admit Handler" podUID="2464169b-59c6-4285-a694-b80fa182e201" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5pskj"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.433700    3635 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.491875    3635 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f848393f-5676-41d7-b9ba-1959514af9da-lib-modules\") pod \"kube-proxy-2dhx5\" (UID: \"f848393f-5676-41d7-b9ba-1959514af9da\") " pod="kube-system/kube-proxy-2dhx5"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.492001    3635 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f848393f-5676-41d7-b9ba-1959514af9da-xtables-lock\") pod \"kube-proxy-2dhx5\" (UID: \"f848393f-5676-41d7-b9ba-1959514af9da\") " pod="kube-system/kube-proxy-2dhx5"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.714163    3635 scope.go:117] "RemoveContainer" containerID="035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.716021    3635 scope.go:117] "RemoveContainer" containerID="8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-414966 -n pause-414966
helpers_test.go:261: (dbg) Run:  kubectl --context pause-414966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-414966 -n pause-414966
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-414966 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-414966 logs -n 25: (1.899390085s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-050658       | kubernetes-upgrade-050658 | jenkins | v1.33.1 | 29 Jul 24 14:18 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-764732        | force-systemd-env-764732  | jenkins | v1.33.1 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:19 UTC |
	| start   | -p stopped-upgrade-626874          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:20 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p offline-crio-715623             | offline-crio-715623       | jenkins | v1.33.1 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:19 UTC |
	| start   | -p running-upgrade-932740          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:19 UTC | 29 Jul 24 14:20 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-626874 stop        | minikube                  | jenkins | v1.26.0 | 29 Jul 24 14:20 UTC | 29 Jul 24 14:20 UTC |
	| start   | -p stopped-upgrade-626874          | stopped-upgrade-626874    | jenkins | v1.33.1 | 29 Jul 24 14:20 UTC | 29 Jul 24 14:21 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:20 UTC | 29 Jul 24 14:20 UTC |
	| start   | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:20 UTC | 29 Jul 24 14:21 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-932740          | running-upgrade-932740    | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:22 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721916 sudo        | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-626874          | stopped-upgrade-626874    | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:21 UTC |
	| start   | -p pause-414966 --memory=2048      | pause-414966              | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:23 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:21 UTC |
	| start   | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:21 UTC | 29 Jul 24 14:22 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721916 sudo        | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-721916             | NoKubernetes-721916       | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC | 29 Jul 24 14:22 UTC |
	| start   | -p cert-expiration-869983          | cert-expiration-869983    | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC | 29 Jul 24 14:23 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-932740          | running-upgrade-932740    | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC | 29 Jul 24 14:22 UTC |
	| start   | -p force-systemd-flag-956245       | force-systemd-flag-956245 | jenkins | v1.33.1 | 29 Jul 24 14:22 UTC | 29 Jul 24 14:23 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-414966                    | pause-414966              | jenkins | v1.33.1 | 29 Jul 24 14:23 UTC | 29 Jul 24 14:24 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-956245 ssh cat  | force-systemd-flag-956245 | jenkins | v1.33.1 | 29 Jul 24 14:23 UTC | 29 Jul 24 14:23 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-956245       | force-systemd-flag-956245 | jenkins | v1.33.1 | 29 Jul 24 14:23 UTC | 29 Jul 24 14:23 UTC |
	| start   | -p cert-options-442776             | cert-options-442776       | jenkins | v1.33.1 | 29 Jul 24 14:23 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:23:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:23:37.822288 1023348 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:23:37.822403 1023348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:23:37.822408 1023348 out.go:304] Setting ErrFile to fd 2...
	I0729 14:23:37.822411 1023348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:23:37.822587 1023348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:23:37.823175 1023348 out.go:298] Setting JSON to false
	I0729 14:23:37.824269 1023348 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14770,"bootTime":1722248248,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:23:37.824319 1023348 start.go:139] virtualization: kvm guest
	I0729 14:23:37.827206 1023348 out.go:177] * [cert-options-442776] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:23:37.828790 1023348 notify.go:220] Checking for updates...
	I0729 14:23:37.828811 1023348 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:23:37.830498 1023348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:23:37.831931 1023348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:23:37.833487 1023348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:23:37.835154 1023348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:23:37.836639 1023348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:23:37.838497 1023348 config.go:182] Loaded profile config "cert-expiration-869983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:23:37.838577 1023348 config.go:182] Loaded profile config "kubernetes-upgrade-050658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:23:37.838680 1023348 config.go:182] Loaded profile config "pause-414966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:23:37.838763 1023348 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:23:37.875734 1023348 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 14:23:37.876958 1023348 start.go:297] selected driver: kvm2
	I0729 14:23:37.876977 1023348 start.go:901] validating driver "kvm2" against <nil>
	I0729 14:23:37.876986 1023348 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:23:37.877661 1023348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:23:37.877722 1023348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:23:37.893415 1023348 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:23:37.893452 1023348 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 14:23:37.893673 1023348 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 14:23:37.893722 1023348 cni.go:84] Creating CNI manager for ""
	I0729 14:23:37.893730 1023348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:23:37.893736 1023348 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 14:23:37.893806 1023348 start.go:340] cluster config:
	{Name:cert-options-442776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-442776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.
1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0729 14:23:37.893910 1023348 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:23:37.895818 1023348 out.go:177] * Starting "cert-options-442776" primary control-plane node in "cert-options-442776" cluster
	I0729 14:23:37.897268 1023348 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:23:37.897296 1023348 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 14:23:37.897302 1023348 cache.go:56] Caching tarball of preloaded images
	I0729 14:23:37.897385 1023348 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:23:37.897392 1023348 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 14:23:37.897491 1023348 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/cert-options-442776/config.json ...
	I0729 14:23:37.897504 1023348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/cert-options-442776/config.json: {Name:mkb799c6fa34df2f36b22f7ab03cf322b62992b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:23:37.897624 1023348 start.go:360] acquireMachinesLock for cert-options-442776: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:23:37.897657 1023348 start.go:364] duration metric: took 17.003µs to acquireMachinesLock for "cert-options-442776"
	I0729 14:23:37.897670 1023348 start.go:93] Provisioning new machine with config: &{Name:cert-options-442776 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.30.3 ClusterName:cert-options-442776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:23:37.897714 1023348 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 14:23:39.201825 1023077 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16 8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c 8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee 692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8 f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd 1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466 594d17490a04d5ffe49bbf6c7987154d651e6110bbeea4dacc652c85edf3d360: (9.772261526s)
	I0729 14:23:39.201911 1023077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:23:39.245235 1023077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:23:39.256763 1023077 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 29 14:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 29 14:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 29 14:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 29 14:22 /etc/kubernetes/scheduler.conf
	
	I0729 14:23:39.256841 1023077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:23:39.266710 1023077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:23:39.276692 1023077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:23:39.286281 1023077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:23:39.286355 1023077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:23:39.296449 1023077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:23:39.306079 1023077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:23:39.306155 1023077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
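	Note: the kubeadm.go lines above show the soft-restart path checking each existing kubeconfig for the expected control-plane endpoint and deleting the ones that no longer reference https://control-plane.minikube.internal:8443, so kubeadm can regenerate them. A standalone sketch of that check (hypothetical; minikube runs the equivalent grep/rm over SSH via ssh_runner, not locally):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			continue // file absent: nothing to clean up
    		}
    		if !strings.Contains(string(data), endpoint) {
    			// Stale or missing endpoint: remove so the kubeconfig phase recreates it.
    			fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
    			os.Remove(f)
    		}
    	}
    }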
	I0729 14:23:39.316781 1023077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:23:39.327238 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:39.393042 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:40.052024 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:40.304129 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:40.412862 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
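	Note: the five ssh_runner calls above re-run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A hedged sketch of driving the same sequence with os/exec (illustrative; minikube executes these inside the VM over SSH, and the env PATH prefix shown in the log is omitted here for brevity):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"kubeadm", "init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err != nil {
    			log.Fatalf("phase %v failed: %v\n%s", p, err, out)
    		}
    	}
    }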
	I0729 14:23:40.593265 1023077 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:23:40.593374 1023077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:23:41.093868 1023077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:23:41.125849 1023077 api_server.go:72] duration metric: took 532.585175ms to wait for apiserver process to appear ...
	I0729 14:23:41.125898 1023077 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:23:41.125922 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:41.126475 1023077 api_server.go:269] stopped: https://192.168.50.133:8443/healthz: Get "https://192.168.50.133:8443/healthz": dial tcp 192.168.50.133:8443: connect: connection refused
	I0729 14:23:37.899397 1023348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 14:23:37.899527 1023348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:23:37.899559 1023348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:23:37.913532 1023348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46181
	I0729 14:23:37.913972 1023348 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:23:37.914515 1023348 main.go:141] libmachine: Using API Version  1
	I0729 14:23:37.914529 1023348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:23:37.914857 1023348 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:23:37.915032 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetMachineName
	I0729 14:23:37.915159 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:23:37.915281 1023348 start.go:159] libmachine.API.Create for "cert-options-442776" (driver="kvm2")
	I0729 14:23:37.915305 1023348 client.go:168] LocalClient.Create starting
	I0729 14:23:37.915338 1023348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 14:23:37.915370 1023348 main.go:141] libmachine: Decoding PEM data...
	I0729 14:23:37.915385 1023348 main.go:141] libmachine: Parsing certificate...
	I0729 14:23:37.915450 1023348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 14:23:37.915465 1023348 main.go:141] libmachine: Decoding PEM data...
	I0729 14:23:37.915474 1023348 main.go:141] libmachine: Parsing certificate...
	I0729 14:23:37.915485 1023348 main.go:141] libmachine: Running pre-create checks...
	I0729 14:23:37.915495 1023348 main.go:141] libmachine: (cert-options-442776) Calling .PreCreateCheck
	I0729 14:23:37.915813 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetConfigRaw
	I0729 14:23:37.916200 1023348 main.go:141] libmachine: Creating machine...
	I0729 14:23:37.916206 1023348 main.go:141] libmachine: (cert-options-442776) Calling .Create
	I0729 14:23:37.916358 1023348 main.go:141] libmachine: (cert-options-442776) Creating KVM machine...
	I0729 14:23:37.917525 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found existing default KVM network
	I0729 14:23:37.918995 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.918823 1023371 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:c4:5a} reservation:<nil>}
	I0729 14:23:37.920151 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.920073 1023371 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:cd:4b} reservation:<nil>}
	I0729 14:23:37.921373 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.921277 1023371 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:17:03:6b} reservation:<nil>}
	I0729 14:23:37.923619 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.923503 1023371 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 14:23:37.924876 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.924804 1023371 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c40}
	I0729 14:23:37.924902 1023348 main.go:141] libmachine: (cert-options-442776) DBG | created network xml: 
	I0729 14:23:37.924921 1023348 main.go:141] libmachine: (cert-options-442776) DBG | <network>
	I0729 14:23:37.924937 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   <name>mk-cert-options-442776</name>
	I0729 14:23:37.924948 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   <dns enable='no'/>
	I0729 14:23:37.924955 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   
	I0729 14:23:37.924963 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0729 14:23:37.924970 1023348 main.go:141] libmachine: (cert-options-442776) DBG |     <dhcp>
	I0729 14:23:37.924978 1023348 main.go:141] libmachine: (cert-options-442776) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0729 14:23:37.924984 1023348 main.go:141] libmachine: (cert-options-442776) DBG |     </dhcp>
	I0729 14:23:37.924989 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   </ip>
	I0729 14:23:37.924994 1023348 main.go:141] libmachine: (cert-options-442776) DBG |   
	I0729 14:23:37.924998 1023348 main.go:141] libmachine: (cert-options-442776) DBG | </network>
	I0729 14:23:37.925006 1023348 main.go:141] libmachine: (cert-options-442776) DBG | 
	I0729 14:23:37.930739 1023348 main.go:141] libmachine: (cert-options-442776) DBG | trying to create private KVM network mk-cert-options-442776 192.168.83.0/24...
	I0729 14:23:37.998543 1023348 main.go:141] libmachine: (cert-options-442776) DBG | private KVM network mk-cert-options-442776 192.168.83.0/24 created
	I0729 14:23:37.998633 1023348 main.go:141] libmachine: (cert-options-442776) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776 ...
	I0729 14:23:37.998665 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:37.998503 1023371 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:23:37.998683 1023348 main.go:141] libmachine: (cert-options-442776) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 14:23:37.998707 1023348 main.go:141] libmachine: (cert-options-442776) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 14:23:38.260689 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:38.260495 1023371 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa...
	I0729 14:23:38.414988 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:38.414828 1023371 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/cert-options-442776.rawdisk...
	I0729 14:23:38.415015 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Writing magic tar header
	I0729 14:23:38.415033 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Writing SSH key tar header
	I0729 14:23:38.415044 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:38.414981 1023371 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776 ...
	I0729 14:23:38.415156 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776
	I0729 14:23:38.415180 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776 (perms=drwx------)
	I0729 14:23:38.415191 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 14:23:38.415202 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 14:23:38.415214 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 14:23:38.415221 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 14:23:38.415234 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 14:23:38.415241 1023348 main.go:141] libmachine: (cert-options-442776) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 14:23:38.415249 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:23:38.415273 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 14:23:38.415281 1023348 main.go:141] libmachine: (cert-options-442776) Creating domain...
	I0729 14:23:38.415288 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 14:23:38.415295 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home/jenkins
	I0729 14:23:38.415308 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Checking permissions on dir: /home
	I0729 14:23:38.415318 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Skipping /home - not owner
	I0729 14:23:38.416558 1023348 main.go:141] libmachine: (cert-options-442776) define libvirt domain using xml: 
	I0729 14:23:38.416591 1023348 main.go:141] libmachine: (cert-options-442776) <domain type='kvm'>
	I0729 14:23:38.416600 1023348 main.go:141] libmachine: (cert-options-442776)   <name>cert-options-442776</name>
	I0729 14:23:38.416607 1023348 main.go:141] libmachine: (cert-options-442776)   <memory unit='MiB'>2048</memory>
	I0729 14:23:38.416614 1023348 main.go:141] libmachine: (cert-options-442776)   <vcpu>2</vcpu>
	I0729 14:23:38.416628 1023348 main.go:141] libmachine: (cert-options-442776)   <features>
	I0729 14:23:38.416635 1023348 main.go:141] libmachine: (cert-options-442776)     <acpi/>
	I0729 14:23:38.416640 1023348 main.go:141] libmachine: (cert-options-442776)     <apic/>
	I0729 14:23:38.416646 1023348 main.go:141] libmachine: (cert-options-442776)     <pae/>
	I0729 14:23:38.416651 1023348 main.go:141] libmachine: (cert-options-442776)     
	I0729 14:23:38.416658 1023348 main.go:141] libmachine: (cert-options-442776)   </features>
	I0729 14:23:38.416663 1023348 main.go:141] libmachine: (cert-options-442776)   <cpu mode='host-passthrough'>
	I0729 14:23:38.416669 1023348 main.go:141] libmachine: (cert-options-442776)   
	I0729 14:23:38.416678 1023348 main.go:141] libmachine: (cert-options-442776)   </cpu>
	I0729 14:23:38.416699 1023348 main.go:141] libmachine: (cert-options-442776)   <os>
	I0729 14:23:38.416707 1023348 main.go:141] libmachine: (cert-options-442776)     <type>hvm</type>
	I0729 14:23:38.416712 1023348 main.go:141] libmachine: (cert-options-442776)     <boot dev='cdrom'/>
	I0729 14:23:38.416716 1023348 main.go:141] libmachine: (cert-options-442776)     <boot dev='hd'/>
	I0729 14:23:38.416726 1023348 main.go:141] libmachine: (cert-options-442776)     <bootmenu enable='no'/>
	I0729 14:23:38.416729 1023348 main.go:141] libmachine: (cert-options-442776)   </os>
	I0729 14:23:38.416733 1023348 main.go:141] libmachine: (cert-options-442776)   <devices>
	I0729 14:23:38.416737 1023348 main.go:141] libmachine: (cert-options-442776)     <disk type='file' device='cdrom'>
	I0729 14:23:38.416744 1023348 main.go:141] libmachine: (cert-options-442776)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/boot2docker.iso'/>
	I0729 14:23:38.416748 1023348 main.go:141] libmachine: (cert-options-442776)       <target dev='hdc' bus='scsi'/>
	I0729 14:23:38.416753 1023348 main.go:141] libmachine: (cert-options-442776)       <readonly/>
	I0729 14:23:38.416756 1023348 main.go:141] libmachine: (cert-options-442776)     </disk>
	I0729 14:23:38.416761 1023348 main.go:141] libmachine: (cert-options-442776)     <disk type='file' device='disk'>
	I0729 14:23:38.416766 1023348 main.go:141] libmachine: (cert-options-442776)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 14:23:38.416774 1023348 main.go:141] libmachine: (cert-options-442776)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/cert-options-442776.rawdisk'/>
	I0729 14:23:38.416781 1023348 main.go:141] libmachine: (cert-options-442776)       <target dev='hda' bus='virtio'/>
	I0729 14:23:38.416785 1023348 main.go:141] libmachine: (cert-options-442776)     </disk>
	I0729 14:23:38.416792 1023348 main.go:141] libmachine: (cert-options-442776)     <interface type='network'>
	I0729 14:23:38.416797 1023348 main.go:141] libmachine: (cert-options-442776)       <source network='mk-cert-options-442776'/>
	I0729 14:23:38.416803 1023348 main.go:141] libmachine: (cert-options-442776)       <model type='virtio'/>
	I0729 14:23:38.416808 1023348 main.go:141] libmachine: (cert-options-442776)     </interface>
	I0729 14:23:38.416814 1023348 main.go:141] libmachine: (cert-options-442776)     <interface type='network'>
	I0729 14:23:38.416819 1023348 main.go:141] libmachine: (cert-options-442776)       <source network='default'/>
	I0729 14:23:38.416822 1023348 main.go:141] libmachine: (cert-options-442776)       <model type='virtio'/>
	I0729 14:23:38.416826 1023348 main.go:141] libmachine: (cert-options-442776)     </interface>
	I0729 14:23:38.416830 1023348 main.go:141] libmachine: (cert-options-442776)     <serial type='pty'>
	I0729 14:23:38.416834 1023348 main.go:141] libmachine: (cert-options-442776)       <target port='0'/>
	I0729 14:23:38.416837 1023348 main.go:141] libmachine: (cert-options-442776)     </serial>
	I0729 14:23:38.416841 1023348 main.go:141] libmachine: (cert-options-442776)     <console type='pty'>
	I0729 14:23:38.416844 1023348 main.go:141] libmachine: (cert-options-442776)       <target type='serial' port='0'/>
	I0729 14:23:38.416857 1023348 main.go:141] libmachine: (cert-options-442776)     </console>
	I0729 14:23:38.416866 1023348 main.go:141] libmachine: (cert-options-442776)     <rng model='virtio'>
	I0729 14:23:38.416871 1023348 main.go:141] libmachine: (cert-options-442776)       <backend model='random'>/dev/random</backend>
	I0729 14:23:38.416874 1023348 main.go:141] libmachine: (cert-options-442776)     </rng>
	I0729 14:23:38.416878 1023348 main.go:141] libmachine: (cert-options-442776)     
	I0729 14:23:38.416881 1023348 main.go:141] libmachine: (cert-options-442776)     
	I0729 14:23:38.416885 1023348 main.go:141] libmachine: (cert-options-442776)   </devices>
	I0729 14:23:38.416888 1023348 main.go:141] libmachine: (cert-options-442776) </domain>
	I0729 14:23:38.416895 1023348 main.go:141] libmachine: (cert-options-442776) 
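	Note: the lines above are the libvirt domain XML that the kvm2 driver generates, one log line per XML line, followed by "Creating domain...". A minimal sketch of defining and starting a domain from such XML with the libvirt Go bindings (assuming the libvirt.org/go/libvirt package; the real implementation lives in the docker-machine-driver-kvm2 plugin):

    package main

    import (
    	"log"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	// domain.xml would hold the <domain type='kvm'> definition shown in the log above.
    	xml, err := os.ReadFile("domain.xml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
    		log.Fatal(err)
    	}
    }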
	I0729 14:23:38.421890 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:36:3e:89 in network default
	I0729 14:23:38.422552 1023348 main.go:141] libmachine: (cert-options-442776) Ensuring networks are active...
	I0729 14:23:38.422576 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:38.423268 1023348 main.go:141] libmachine: (cert-options-442776) Ensuring network default is active
	I0729 14:23:38.423709 1023348 main.go:141] libmachine: (cert-options-442776) Ensuring network mk-cert-options-442776 is active
	I0729 14:23:38.424237 1023348 main.go:141] libmachine: (cert-options-442776) Getting domain xml...
	I0729 14:23:38.424976 1023348 main.go:141] libmachine: (cert-options-442776) Creating domain...
	I0729 14:23:38.745900 1023348 main.go:141] libmachine: (cert-options-442776) Waiting to get IP...
	I0729 14:23:38.746585 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:38.747071 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:38.747103 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:38.747048 1023371 retry.go:31] will retry after 302.81585ms: waiting for machine to come up
	I0729 14:23:39.051632 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:39.052146 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:39.052168 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:39.052089 1023371 retry.go:31] will retry after 318.183671ms: waiting for machine to come up
	I0729 14:23:39.371631 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:39.372289 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:39.372318 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:39.372213 1023371 retry.go:31] will retry after 385.473463ms: waiting for machine to come up
	I0729 14:23:39.758958 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:39.759477 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:39.759501 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:39.759417 1023371 retry.go:31] will retry after 491.502129ms: waiting for machine to come up
	I0729 14:23:40.252171 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:40.252770 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:40.252793 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:40.252701 1023371 retry.go:31] will retry after 648.148309ms: waiting for machine to come up
	I0729 14:23:40.902788 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:40.903274 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:40.903336 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:40.903235 1023371 retry.go:31] will retry after 872.321979ms: waiting for machine to come up
	I0729 14:23:41.777537 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:41.778023 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:41.778041 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:41.778001 1023371 retry.go:31] will retry after 898.968832ms: waiting for machine to come up
	I0729 14:23:42.678598 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:42.678992 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:42.679015 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:42.678932 1023371 retry.go:31] will retry after 1.366079761s: waiting for machine to come up
	I0729 14:23:41.626605 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:44.461435 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:23:44.461469 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:23:44.461485 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:44.491343 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:23:44.491381 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:23:44.626625 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:44.631551 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:23:44.631594 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:23:45.126140 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:45.130809 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:23:45.130848 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:23:45.626525 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:45.632787 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:23:45.632817 1023077 api_server.go:103] status: https://192.168.50.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:23:46.126177 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:23:46.132204 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 200:
	ok
	I0729 14:23:46.139857 1023077 api_server.go:141] control plane version: v1.30.3
	I0729 14:23:46.139888 1023077 api_server.go:131] duration metric: took 5.013982606s to wait for apiserver health ...
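The wait above loops on the apiserver's /healthz endpoint until the 500 responses (caused by the still-failing rbac/bootstrap-roles and scheduling poststarthooks) give way to a 200. Below is a minimal, self-contained sketch of that polling pattern in Go. It is not minikube's actual api_server.go: the URL is taken from the log, the retry interval and deadline are assumptions, and skipping TLS verification is done only to keep the sketch short (the real client trusts the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	url := "https://192.168.50.133:8443/healthz"

	// The apiserver presents a cluster-internal certificate; skipping verification
	// is an assumption made only to keep this sketch self-contained.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// A 500 body lists every poststarthook; the [-] entries are the ones still failing.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}

In the log, the same transition is visible directly: two consecutive probes return 500 with [-] entries, and the third (14:23:46.132204) returns 200 "ok".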
	I0729 14:23:46.139899 1023077 cni.go:84] Creating CNI manager for ""
	I0729 14:23:46.139908 1023077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:23:46.142195 1023077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:23:46.143970 1023077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:23:46.155651 1023077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:23:46.174812 1023077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:23:46.192007 1023077 system_pods.go:59] 6 kube-system pods found
	I0729 14:23:46.192046 1023077 system_pods.go:61] "coredns-7db6d8ff4d-5pskj" [2464169b-59c6-4285-a694-b80fa182e201] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:23:46.192058 1023077 system_pods.go:61] "etcd-pause-414966" [d9179f9e-aafe-4b1f-80b1-dd18a3af76d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:23:46.192067 1023077 system_pods.go:61] "kube-apiserver-pause-414966" [c763bb96-2d45-48c7-a710-7bf74b3c731e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:23:46.192076 1023077 system_pods.go:61] "kube-controller-manager-pause-414966" [c112b4ee-82c6-4099-a728-c9698c7dd8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:23:46.192083 1023077 system_pods.go:61] "kube-proxy-2dhx5" [f848393f-5676-41d7-b9ba-1959514af9da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:23:46.192090 1023077 system_pods.go:61] "kube-scheduler-pause-414966" [9cd2428e-4c42-4153-84e1-683103b5640f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:23:46.192098 1023077 system_pods.go:74] duration metric: took 17.261835ms to wait for pod list to return data ...
	I0729 14:23:46.192108 1023077 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:23:46.195395 1023077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:23:46.195424 1023077 node_conditions.go:123] node cpu capacity is 2
	I0729 14:23:46.195437 1023077 node_conditions.go:105] duration metric: took 3.32314ms to run NodePressure ...
	I0729 14:23:46.195479 1023077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:23:46.460099 1023077 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:23:44.047381 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:44.047952 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:44.047975 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:44.047885 1023371 retry.go:31] will retry after 1.497047858s: waiting for machine to come up
	I0729 14:23:45.547467 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:45.548055 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:45.548080 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:45.548005 1023371 retry.go:31] will retry after 1.432302652s: waiting for machine to come up
	I0729 14:23:46.982374 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:46.982884 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:46.982912 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:46.982807 1023371 retry.go:31] will retry after 2.77354977s: waiting for machine to come up
	I0729 14:23:46.465323 1023077 kubeadm.go:739] kubelet initialised
	I0729 14:23:46.465350 1023077 kubeadm.go:740] duration metric: took 5.216165ms waiting for restarted kubelet to initialise ...
	I0729 14:23:46.465369 1023077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:23:46.471142 1023077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace to be "Ready" ...
	I0729 14:23:46.978477 1023077 pod_ready.go:92] pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace has status "Ready":"True"
	I0729 14:23:46.978506 1023077 pod_ready.go:81] duration metric: took 507.333647ms for pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace to be "Ready" ...
	I0729 14:23:46.978520 1023077 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:23:48.986524 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:23:49.757611 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:49.758097 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:49.758115 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:49.758037 1023371 retry.go:31] will retry after 2.40654272s: waiting for machine to come up
	I0729 14:23:52.165935 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:52.166308 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:52.166325 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:52.166263 1023371 retry.go:31] will retry after 3.450920562s: waiting for machine to come up
	I0729 14:23:51.484961 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:23:53.485238 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:23:55.985411 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:23:55.620205 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:23:55.620706 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find current IP address of domain cert-options-442776 in network mk-cert-options-442776
	I0729 14:23:55.620875 1023348 main.go:141] libmachine: (cert-options-442776) DBG | I0729 14:23:55.620696 1023371 retry.go:31] will retry after 5.00068248s: waiting for machine to come up
	I0729 14:23:57.985576 1023077 pod_ready.go:102] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"False"
	I0729 14:24:00.487011 1023077 pod_ready.go:92] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:00.487040 1023077 pod_ready.go:81] duration metric: took 13.50851267s for pod "etcd-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:00.487052 1023077 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:00.493229 1023077 pod_ready.go:92] pod "kube-apiserver-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:00.493250 1023077 pod_ready.go:81] duration metric: took 6.191634ms for pod "kube-apiserver-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:00.493260 1023077 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.000826 1023077 pod_ready.go:92] pod "kube-controller-manager-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:01.000866 1023077 pod_ready.go:81] duration metric: took 507.589786ms for pod "kube-controller-manager-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.000881 1023077 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2dhx5" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.005663 1023077 pod_ready.go:92] pod "kube-proxy-2dhx5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:01.005681 1023077 pod_ready.go:81] duration metric: took 4.792457ms for pod "kube-proxy-2dhx5" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.005689 1023077 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.009721 1023077 pod_ready.go:92] pod "kube-scheduler-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:01.009739 1023077 pod_ready.go:81] duration metric: took 4.042575ms for pod "kube-scheduler-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.009745 1023077 pod_ready.go:38] duration metric: took 14.544364047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
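The pod_ready.go lines above poll each system-critical pod until its Ready condition becomes True (etcd-pause-414966 takes about 13.5s here). A rough equivalent of that wait using client-go is sketched below; the kubeconfig path, namespace, pod name, and 4m0s deadline are taken from the log, while the polling interval is an assumption, and this is an illustrative sketch rather than minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// the same condition pod_ready.go waits on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as written by the run above; adjust for another environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19338-974764/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // the log waits "up to 4m0s" per pod
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-414966", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}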
	I0729 14:24:01.009762 1023077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:24:01.024748 1023077 ops.go:34] apiserver oom_adj: -16
	I0729 14:24:01.024771 1023077 kubeadm.go:597] duration metric: took 31.679282749s to restartPrimaryControlPlane
	I0729 14:24:01.024780 1023077 kubeadm.go:394] duration metric: took 31.835205899s to StartCluster
	I0729 14:24:01.024802 1023077 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:24:01.024890 1023077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:24:01.025786 1023077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:24:01.026023 1023077 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:24:01.026100 1023077 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:24:01.026267 1023077 config.go:182] Loaded profile config "pause-414966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:24:01.027793 1023077 out.go:177] * Verifying Kubernetes components...
	I0729 14:24:01.027796 1023077 out.go:177] * Enabled addons: 
	I0729 14:24:01.029092 1023077 addons.go:510] duration metric: took 2.995656ms for enable addons: enabled=[]
	I0729 14:24:01.029151 1023077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:24:01.200858 1023077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:24:01.220552 1023077 node_ready.go:35] waiting up to 6m0s for node "pause-414966" to be "Ready" ...
	I0729 14:24:01.223753 1023077 node_ready.go:49] node "pause-414966" has status "Ready":"True"
	I0729 14:24:01.223775 1023077 node_ready.go:38] duration metric: took 3.188285ms for node "pause-414966" to be "Ready" ...
	I0729 14:24:01.223785 1023077 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:24:01.285918 1023077 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:00.623602 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.624104 1023348 main.go:141] libmachine: (cert-options-442776) Found IP for machine: 192.168.83.83
	I0729 14:24:00.624127 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has current primary IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.624136 1023348 main.go:141] libmachine: (cert-options-442776) Reserving static IP address...
	I0729 14:24:00.624551 1023348 main.go:141] libmachine: (cert-options-442776) DBG | unable to find host DHCP lease matching {name: "cert-options-442776", mac: "52:54:00:1e:8c:65", ip: "192.168.83.83"} in network mk-cert-options-442776
	I0729 14:24:00.697535 1023348 main.go:141] libmachine: (cert-options-442776) Reserved static IP address: 192.168.83.83
	I0729 14:24:00.697559 1023348 main.go:141] libmachine: (cert-options-442776) Waiting for SSH to be available...
	I0729 14:24:00.697568 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Getting to WaitForSSH function...
	I0729 14:24:00.700095 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.700539 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:00.700565 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.700747 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Using SSH client type: external
	I0729 14:24:00.700761 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa (-rw-------)
	I0729 14:24:00.700815 1023348 main.go:141] libmachine: (cert-options-442776) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:24:00.700834 1023348 main.go:141] libmachine: (cert-options-442776) DBG | About to run SSH command:
	I0729 14:24:00.700847 1023348 main.go:141] libmachine: (cert-options-442776) DBG | exit 0
	I0729 14:24:00.828338 1023348 main.go:141] libmachine: (cert-options-442776) DBG | SSH cmd err, output: <nil>: 
	I0729 14:24:00.828657 1023348 main.go:141] libmachine: (cert-options-442776) KVM machine creation complete!
	I0729 14:24:00.829045 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetConfigRaw
	I0729 14:24:00.829546 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:00.829751 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:00.829880 1023348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 14:24:00.829898 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetState
	I0729 14:24:00.831126 1023348 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 14:24:00.831135 1023348 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 14:24:00.831139 1023348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 14:24:00.831144 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:00.833711 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.834080 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:00.834096 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.834251 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:00.834442 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:00.834637 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:00.834778 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:00.834916 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:00.835102 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:00.835107 1023348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 14:24:00.947882 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:24:00.947904 1023348 main.go:141] libmachine: Detecting the provisioner...
	I0729 14:24:00.947911 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:00.950885 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.951240 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:00.951266 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:00.951470 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:00.951681 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:00.951850 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:00.951950 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:00.952124 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:00.952284 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:00.952300 1023348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 14:24:01.068979 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 14:24:01.069093 1023348 main.go:141] libmachine: found compatible host: buildroot
	I0729 14:24:01.069101 1023348 main.go:141] libmachine: Provisioning with buildroot...
	I0729 14:24:01.069111 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetMachineName
	I0729 14:24:01.069390 1023348 buildroot.go:166] provisioning hostname "cert-options-442776"
	I0729 14:24:01.069410 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetMachineName
	I0729 14:24:01.069621 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.072101 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.072585 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.072620 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.072719 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.072899 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.073057 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.073192 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.073364 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:01.073538 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:01.073552 1023348 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-442776 && echo "cert-options-442776" | sudo tee /etc/hostname
	I0729 14:24:01.198724 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-442776
	
	I0729 14:24:01.198743 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.201864 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.202220 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.202248 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.202424 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.202633 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.202789 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.202914 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.203053 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:01.203218 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:01.203229 1023348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-442776' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-442776/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-442776' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:24:01.325604 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:24:01.325633 1023348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:24:01.325673 1023348 buildroot.go:174] setting up certificates
	I0729 14:24:01.325689 1023348 provision.go:84] configureAuth start
	I0729 14:24:01.325703 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetMachineName
	I0729 14:24:01.326036 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetIP
	I0729 14:24:01.329100 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.329477 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.329506 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.329668 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.332091 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.332457 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.332478 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.332633 1023348 provision.go:143] copyHostCerts
	I0729 14:24:01.332680 1023348 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:24:01.332686 1023348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:24:01.332747 1023348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:24:01.332847 1023348 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:24:01.332850 1023348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:24:01.332878 1023348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:24:01.332936 1023348 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:24:01.332938 1023348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:24:01.332957 1023348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:24:01.333016 1023348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.cert-options-442776 san=[127.0.0.1 192.168.83.83 cert-options-442776 localhost minikube]
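The provision step above signs a per-machine server certificate against the minikube CA with SANs [127.0.0.1 192.168.83.83 cert-options-442776 localhost minikube]. A hedged sketch of how such a SAN-bearing certificate can be produced with Go's crypto/x509 follows; it generates a throwaway CA in memory purely for illustration (the real code loads ca.pem/ca-key.pem from the .minikube/certs directory), and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA generated in memory for illustration only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs reported in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-options-442776"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"cert-options-442776", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.83")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}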
	I0729 14:24:01.514202 1023348 provision.go:177] copyRemoteCerts
	I0729 14:24:01.514255 1023348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:24:01.514296 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.517086 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.517415 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.517435 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.517605 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.517785 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.517939 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.518061 1023348 sshutil.go:53] new ssh client: &{IP:192.168.83.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa Username:docker}
	I0729 14:24:01.602784 1023348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:24:01.627806 1023348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 14:24:01.652155 1023348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:24:01.675741 1023348 provision.go:87] duration metric: took 350.039931ms to configureAuth
	I0729 14:24:01.675760 1023348 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:24:01.675954 1023348 config.go:182] Loaded profile config "cert-options-442776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:24:01.676034 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.678845 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.679154 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.679173 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.679330 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.679532 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.679704 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.679894 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.680060 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:01.680227 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:01.680237 1023348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:24:01.952519 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:24:01.952546 1023348 main.go:141] libmachine: Checking connection to Docker...
	I0729 14:24:01.952556 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetURL
	I0729 14:24:01.953971 1023348 main.go:141] libmachine: (cert-options-442776) DBG | Using libvirt version 6000000
	I0729 14:24:01.956386 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.956742 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.956765 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.956945 1023348 main.go:141] libmachine: Docker is up and running!
	I0729 14:24:01.956955 1023348 main.go:141] libmachine: Reticulating splines...
	I0729 14:24:01.956961 1023348 client.go:171] duration metric: took 24.041650515s to LocalClient.Create
	I0729 14:24:01.956987 1023348 start.go:167] duration metric: took 24.041706148s to libmachine.API.Create "cert-options-442776"
	I0729 14:24:01.956995 1023348 start.go:293] postStartSetup for "cert-options-442776" (driver="kvm2")
	I0729 14:24:01.957006 1023348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:24:01.957025 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:01.957281 1023348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:24:01.957302 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:01.959388 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.959706 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:01.959728 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:01.959853 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:01.960051 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:01.960222 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:01.960364 1023348 sshutil.go:53] new ssh client: &{IP:192.168.83.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa Username:docker}
	I0729 14:24:02.046694 1023348 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:24:02.051009 1023348 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:24:02.051029 1023348 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:24:02.051098 1023348 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:24:02.051180 1023348 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:24:02.051284 1023348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:24:02.060434 1023348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:24:02.087037 1023348 start.go:296] duration metric: took 130.027636ms for postStartSetup
	I0729 14:24:02.087075 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetConfigRaw
	I0729 14:24:02.087648 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetIP
	I0729 14:24:02.090309 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.090710 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.090744 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.091018 1023348 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/cert-options-442776/config.json ...
	I0729 14:24:02.091200 1023348 start.go:128] duration metric: took 24.193478851s to createHost
	I0729 14:24:02.091215 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:02.093357 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.093739 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.093760 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.093912 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:02.094086 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:02.094272 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:02.094409 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:02.094544 1023348 main.go:141] libmachine: Using SSH client type: native
	I0729 14:24:02.094721 1023348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.83 22 <nil> <nil>}
	I0729 14:24:02.094726 1023348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:24:02.209146 1023348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263042.185538996
	
	I0729 14:24:02.209163 1023348 fix.go:216] guest clock: 1722263042.185538996
	I0729 14:24:02.209169 1023348 fix.go:229] Guest: 2024-07-29 14:24:02.185538996 +0000 UTC Remote: 2024-07-29 14:24:02.091205534 +0000 UTC m=+24.306330108 (delta=94.333462ms)
	I0729 14:24:02.209193 1023348 fix.go:200] guest clock delta is within tolerance: 94.333462ms
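The delta reported here is simply the guest timestamp minus the host-side "Remote" timestamp from the preceding line: 1722263042.185538996 s − 1722263042.091205534 s = 0.094333462 s ≈ 94.33 ms, which is within the tolerance the fix.go check enforces.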
	I0729 14:24:02.209204 1023348 start.go:83] releasing machines lock for "cert-options-442776", held for 24.311540445s
	I0729 14:24:02.209225 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:02.209544 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetIP
	I0729 14:24:02.212442 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.212866 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.212899 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.213037 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:02.213540 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:02.213762 1023348 main.go:141] libmachine: (cert-options-442776) Calling .DriverName
	I0729 14:24:02.213834 1023348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:24:02.213896 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:02.214037 1023348 ssh_runner.go:195] Run: cat /version.json
	I0729 14:24:02.214056 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHHostname
	I0729 14:24:02.216549 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.216933 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.216958 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.216975 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.217206 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:02.217393 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:02.217480 1023348 main.go:141] libmachine: (cert-options-442776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:8c:65", ip: ""} in network mk-cert-options-442776: {Iface:virbr4 ExpiryTime:2024-07-29 15:23:51 +0000 UTC Type:0 Mac:52:54:00:1e:8c:65 Iaid: IPaddr:192.168.83.83 Prefix:24 Hostname:cert-options-442776 Clientid:01:52:54:00:1e:8c:65}
	I0729 14:24:02.217495 1023348 main.go:141] libmachine: (cert-options-442776) DBG | domain cert-options-442776 has defined IP address 192.168.83.83 and MAC address 52:54:00:1e:8c:65 in network mk-cert-options-442776
	I0729 14:24:02.217663 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHPort
	I0729 14:24:02.217687 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:02.217859 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHKeyPath
	I0729 14:24:02.217886 1023348 sshutil.go:53] new ssh client: &{IP:192.168.83.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa Username:docker}
	I0729 14:24:02.218004 1023348 main.go:141] libmachine: (cert-options-442776) Calling .GetSSHUsername
	I0729 14:24:02.218137 1023348 sshutil.go:53] new ssh client: &{IP:192.168.83.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/cert-options-442776/id_rsa Username:docker}
	I0729 14:24:02.297098 1023348 ssh_runner.go:195] Run: systemctl --version
	I0729 14:24:02.320345 1023348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:24:02.486089 1023348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:24:02.492007 1023348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:24:02.492071 1023348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:24:02.508741 1023348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:24:02.508764 1023348 start.go:495] detecting cgroup driver to use...
	I0729 14:24:02.508819 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:24:02.525797 1023348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:24:02.540718 1023348 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:24:02.540760 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:24:02.555144 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:24:02.570325 1023348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:24:02.689020 1023348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:24:02.842816 1023348 docker.go:233] disabling docker service ...
	I0729 14:24:02.842896 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:24:02.858242 1023348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:24:02.871154 1023348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:24:02.995569 1023348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:24:03.126554 1023348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:24:03.140550 1023348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:24:03.159288 1023348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:24:03.159352 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.170399 1023348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:24:03.170456 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.182170 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.193053 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.203248 1023348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:24:03.213990 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.223942 1023348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.240960 1023348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:24:03.252797 1023348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:24:03.263592 1023348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:24:03.263631 1023348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:24:03.278209 1023348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:24:03.288262 1023348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:24:03.417216 1023348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:24:03.572810 1023348 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:24:03.572873 1023348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:24:03.578191 1023348 start.go:563] Will wait 60s for crictl version
	I0729 14:24:03.578239 1023348 ssh_runner.go:195] Run: which crictl
	I0729 14:24:03.581925 1023348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:24:03.631978 1023348 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:24:03.632063 1023348 ssh_runner.go:195] Run: crio --version
	I0729 14:24:03.660364 1023348 ssh_runner.go:195] Run: crio --version
	I0729 14:24:03.689600 1023348 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
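The sed invocations at 14:24:03.159–03.240 above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. Pieced together from those commands, the keys that drop-in ends up setting look roughly like the excerpt below; this is an illustrative reconstruction, and the rest of the file's contents are not shown in the log.

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

With cgroupfs as the cgroup manager, conmon placed in the pod cgroup, and unprivileged low ports allowed, crio is restarted and the run then waits for /var/run/crio/crio.sock and verifies the runtime with crictl version (CRI-O 1.29.1, as reported a few lines above).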
	I0729 14:24:01.683072 1023077 pod_ready.go:92] pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:01.683099 1023077 pod_ready.go:81] duration metric: took 397.149379ms for pod "coredns-7db6d8ff4d-5pskj" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:01.683115 1023077 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.082866 1023077 pod_ready.go:92] pod "etcd-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:02.082895 1023077 pod_ready.go:81] duration metric: took 399.771982ms for pod "etcd-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.082905 1023077 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.482854 1023077 pod_ready.go:92] pod "kube-apiserver-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:02.482886 1023077 pod_ready.go:81] duration metric: took 399.974762ms for pod "kube-apiserver-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.482897 1023077 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.883313 1023077 pod_ready.go:92] pod "kube-controller-manager-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:02.883339 1023077 pod_ready.go:81] duration metric: took 400.434931ms for pod "kube-controller-manager-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:02.883352 1023077 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2dhx5" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:03.282556 1023077 pod_ready.go:92] pod "kube-proxy-2dhx5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:03.282590 1023077 pod_ready.go:81] duration metric: took 399.229037ms for pod "kube-proxy-2dhx5" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:03.282606 1023077 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:03.682074 1023077 pod_ready.go:92] pod "kube-scheduler-pause-414966" in "kube-system" namespace has status "Ready":"True"
	I0729 14:24:03.682101 1023077 pod_ready.go:81] duration metric: took 399.487813ms for pod "kube-scheduler-pause-414966" in "kube-system" namespace to be "Ready" ...
	I0729 14:24:03.682109 1023077 pod_ready.go:38] duration metric: took 2.458311492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:24:03.682125 1023077 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:24:03.682178 1023077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:24:03.697668 1023077 api_server.go:72] duration metric: took 2.671613016s to wait for apiserver process to appear ...
	I0729 14:24:03.697697 1023077 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:24:03.697720 1023077 api_server.go:253] Checking apiserver healthz at https://192.168.50.133:8443/healthz ...
	I0729 14:24:03.701895 1023077 api_server.go:279] https://192.168.50.133:8443/healthz returned 200:
	ok
	I0729 14:24:03.702808 1023077 api_server.go:141] control plane version: v1.30.3
	I0729 14:24:03.702828 1023077 api_server.go:131] duration metric: took 5.124335ms to wait for apiserver health ...
	I0729 14:24:03.702836 1023077 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:24:03.885573 1023077 system_pods.go:59] 6 kube-system pods found
	I0729 14:24:03.885610 1023077 system_pods.go:61] "coredns-7db6d8ff4d-5pskj" [2464169b-59c6-4285-a694-b80fa182e201] Running
	I0729 14:24:03.885617 1023077 system_pods.go:61] "etcd-pause-414966" [d9179f9e-aafe-4b1f-80b1-dd18a3af76d1] Running
	I0729 14:24:03.885620 1023077 system_pods.go:61] "kube-apiserver-pause-414966" [c763bb96-2d45-48c7-a710-7bf74b3c731e] Running
	I0729 14:24:03.885625 1023077 system_pods.go:61] "kube-controller-manager-pause-414966" [c112b4ee-82c6-4099-a728-c9698c7dd8df] Running
	I0729 14:24:03.885629 1023077 system_pods.go:61] "kube-proxy-2dhx5" [f848393f-5676-41d7-b9ba-1959514af9da] Running
	I0729 14:24:03.885636 1023077 system_pods.go:61] "kube-scheduler-pause-414966" [9cd2428e-4c42-4153-84e1-683103b5640f] Running
	I0729 14:24:03.885643 1023077 system_pods.go:74] duration metric: took 182.802434ms to wait for pod list to return data ...
	I0729 14:24:03.885653 1023077 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:24:04.082476 1023077 default_sa.go:45] found service account: "default"
	I0729 14:24:04.082511 1023077 default_sa.go:55] duration metric: took 196.849805ms for default service account to be created ...
	I0729 14:24:04.082524 1023077 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:24:04.286667 1023077 system_pods.go:86] 6 kube-system pods found
	I0729 14:24:04.286706 1023077 system_pods.go:89] "coredns-7db6d8ff4d-5pskj" [2464169b-59c6-4285-a694-b80fa182e201] Running
	I0729 14:24:04.286716 1023077 system_pods.go:89] "etcd-pause-414966" [d9179f9e-aafe-4b1f-80b1-dd18a3af76d1] Running
	I0729 14:24:04.286723 1023077 system_pods.go:89] "kube-apiserver-pause-414966" [c763bb96-2d45-48c7-a710-7bf74b3c731e] Running
	I0729 14:24:04.286730 1023077 system_pods.go:89] "kube-controller-manager-pause-414966" [c112b4ee-82c6-4099-a728-c9698c7dd8df] Running
	I0729 14:24:04.286736 1023077 system_pods.go:89] "kube-proxy-2dhx5" [f848393f-5676-41d7-b9ba-1959514af9da] Running
	I0729 14:24:04.286742 1023077 system_pods.go:89] "kube-scheduler-pause-414966" [9cd2428e-4c42-4153-84e1-683103b5640f] Running
	I0729 14:24:04.286751 1023077 system_pods.go:126] duration metric: took 204.220016ms to wait for k8s-apps to be running ...
	I0729 14:24:04.286764 1023077 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:24:04.286825 1023077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:24:04.303981 1023077 system_svc.go:56] duration metric: took 17.193271ms WaitForService to wait for kubelet
	I0729 14:24:04.304025 1023077 kubeadm.go:582] duration metric: took 3.277972968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:24:04.304052 1023077 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:24:04.483661 1023077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:24:04.483689 1023077 node_conditions.go:123] node cpu capacity is 2
	I0729 14:24:04.483702 1023077 node_conditions.go:105] duration metric: took 179.644209ms to run NodePressure ...
	I0729 14:24:04.483714 1023077 start.go:241] waiting for startup goroutines ...
	I0729 14:24:04.483720 1023077 start.go:246] waiting for cluster config update ...
	I0729 14:24:04.483726 1023077 start.go:255] writing updated cluster config ...
	I0729 14:24:04.484003 1023077 ssh_runner.go:195] Run: rm -f paused
	I0729 14:24:04.546634 1023077 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:24:04.548956 1023077 out.go:177] * Done! kubectl is now configured to use "pause-414966" cluster and "default" namespace by default
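	The wait loop above boils down to two checks that can be repeated by hand once "Done!" is printed: the apiserver healthz probe and kube-system pod readiness. A minimal sketch, assuming kubectl is pointed at the pause-414966 context written by this run:

	  # Re-run the checks from the log: healthz should print "ok",
	  # and all six kube-system pods listed above should be Running.
	  kubectl --context pause-414966 get --raw='/healthz'
	  kubectl --context pause-414966 get pods -n kube-system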
	
	
	==> CRI-O <==
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.652660621Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pskj,Uid:2464169b-59c6-4285-a694-b80fa182e201,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263008559727435,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:22:34.228238645Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-414966,Uid:ae8aee0c0e3f12fd9587843c4103aeff,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263008278812325,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ae8aee0c0e3f12fd9587843c4103aeff,kubernetes.io/config.seen: 2024-07-29T14:22:20.492816941Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&PodSandboxMetadata{Name:kube-proxy-2dhx5,Uid:f848393f-5676-41d7-b9ba-1959514af9da,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263008209825329,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b
9ba-1959514af9da,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:22:34.055957964Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-414966,Uid:d6186af066b5a6468d9db65e175ff32e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263008167488934,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.133:8443,kubernetes.io/config.hash: d6186af066b5a6468d9db65e175ff32e,kubernetes.io/config.seen: 2024-07-29T14:22:20.492815757Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&PodSandboxMetadata{Name:etcd-pause-414966,Uid:e44ac6c9087dd4b2a935089311a5831f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263008082814699,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.133:2379,kubernetes.io/config.hash: e44ac6c9087dd4b2a935089311a5831f,kubernetes.io/config.seen: 2024-07-29T14:22:20.492810732Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-414966,Uid:58713d53fe7d58950825c03b9966eee2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722263008051205486,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe7d58950825c03b9966eee2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 58713d53fe7d58950825c03b9966eee2,kubernetes.io/config.seen: 2024-07-29T14:22:20.492817993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&PodSandboxMetadata{Name:kube-proxy-2dhx5,Uid:f848393f-5676-41d7-b9ba-1959514af9da,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722263005963773534,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-29T14:22:34.055957964Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-414966,Uid:d6186af066b5a6468d9db65e175ff32e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722263005936504501,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.133:8443,kubernetes.io/config.hash: d6186af066b5a6468d9db65e175ff32e,kubernetes.io/config.seen: 2024-07-29T14:22:20.492815757Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&PodSand
boxMetadata{Name:etcd-pause-414966,Uid:e44ac6c9087dd4b2a935089311a5831f,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722263005934268772,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.133:2379,kubernetes.io/config.hash: e44ac6c9087dd4b2a935089311a5831f,kubernetes.io/config.seen: 2024-07-29T14:22:20.492810732Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-414966,Uid:58713d53fe7d58950825c03b9966eee2,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722263005903422369,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: PO
D,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe7d58950825c03b9966eee2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 58713d53fe7d58950825c03b9966eee2,kubernetes.io/config.seen: 2024-07-29T14:22:20.492817993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-414966,Uid:ae8aee0c0e3f12fd9587843c4103aeff,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722263005903060619,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ae8aee0c0e3f12fd9587843c4103aeff,kubernetes
.io/config.seen: 2024-07-29T14:22:20.492816941Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe3a380a225c566a0c05e31e79fb4022a58586ec621538941effba8ade2f2a84,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2tml,Uid:06cce12c-474c-4efc-a2e5-0d67be47bdf5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722262954492630633,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-h2tml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06cce12c-474c-4efc-a2e5-0d67be47bdf5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:22:34.176218053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7e0f101d-66dd-44a6-a9a9-2c1bb7c0b41e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.653920912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=327f6650-da20-4046-83f0-de926f61e604 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.654016480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=327f6650-da20-4046-83f0-de926f61e604 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.654399914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491,PodSandboxId:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263025742246828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263025736181495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ad01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63,PodSandboxId:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263020879515462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe
7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205,PodSandboxId:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263020868685933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]
string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe,PodSandboxId:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263020883588283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632,PodSandboxId:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263020849816798,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263008866581913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4a
d01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c,PodSandboxId:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722263006566680784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee,PodSandboxId:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722263006509165892,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8,PodSandboxId:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722263006440075888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd,PodSandboxId:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722263006397891697,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 58713d53fe7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466,PodSandboxId:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722263006288983798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=327f6650-da20-4046-83f0-de926f61e604 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.664344602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=529921c0-cca1-457b-a395-d26150927a87 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.664435362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=529921c0-cca1-457b-a395-d26150927a87 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.666228717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3d5bc5f-7b1f-4657-baba-ac6e39dc4a85 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.666713017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263047666684440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3d5bc5f-7b1f-4657-baba-ac6e39dc4a85 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.667534107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6842363f-2904-486a-b91a-9fb32b1eda11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.667602804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6842363f-2904-486a-b91a-9fb32b1eda11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.667900483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491,PodSandboxId:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263025742246828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263025736181495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ad01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63,PodSandboxId:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263020879515462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe
7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205,PodSandboxId:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263020868685933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]
string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe,PodSandboxId:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263020883588283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632,PodSandboxId:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263020849816798,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263008866581913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4a
d01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c,PodSandboxId:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722263006566680784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee,PodSandboxId:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722263006509165892,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8,PodSandboxId:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722263006440075888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd,PodSandboxId:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722263006397891697,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 58713d53fe7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466,PodSandboxId:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722263006288983798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6842363f-2904-486a-b91a-9fb32b1eda11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.723291634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b20b34e-bd28-49c2-bed3-05467f7680c1 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.723370989Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b20b34e-bd28-49c2-bed3-05467f7680c1 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.724871950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94b1480b-7d08-4d0f-843e-457eb32a4b56 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.725501074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263047725475945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94b1480b-7d08-4d0f-843e-457eb32a4b56 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.726500557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b470ead0-2148-44ed-b6af-d908a17bf454 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.726562055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b470ead0-2148-44ed-b6af-d908a17bf454 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.727249394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491,PodSandboxId:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263025742246828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263025736181495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ad01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63,PodSandboxId:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263020879515462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe
7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205,PodSandboxId:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263020868685933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]
string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe,PodSandboxId:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263020883588283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632,PodSandboxId:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263020849816798,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263008866581913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4a
d01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c,PodSandboxId:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722263006566680784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee,PodSandboxId:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722263006509165892,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8,PodSandboxId:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722263006440075888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd,PodSandboxId:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722263006397891697,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 58713d53fe7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466,PodSandboxId:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722263006288983798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b470ead0-2148-44ed-b6af-d908a17bf454 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.784694142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fdc4f65-141a-4ef1-a3fa-c828d15594a8 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.784839021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fdc4f65-141a-4ef1-a3fa-c828d15594a8 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.786387146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fabc355e-5e41-4acb-baeb-5989593bb58a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.786969044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722263047786931749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fabc355e-5e41-4acb-baeb-5989593bb58a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.787723723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d467105-051d-4171-b9b0-6255e88c27ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.787831368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d467105-051d-4171-b9b0-6255e88c27ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:24:07 pause-414966 crio[3004]: time="2024-07-29 14:24:07.788382162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491,PodSandboxId:a5e006c90fadb3e498cd57f70e3edebe58d91eca9c20950bfb5828914af6f76d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263025742246828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263025736181495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ad01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63,PodSandboxId:7cf54881f6b05148b6b04380af0cf5588782bf1bbf96095103de5305cbf7dcef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263020879515462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58713d53fe
7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205,PodSandboxId:f080ba05af40fb69a73617adcb4d5ec5ff085b51de2842e3beaf99f6cce17e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263020868685933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]
string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe,PodSandboxId:43fb3731c30340d38930d633ac662ced02ae5218bf18b81dd40064b0d1dcc644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263020883588283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:ma
p[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632,PodSandboxId:20ac66bd7fe786d7335411012a28b23a7c362a6a17f72638b5c5776cd7706c8d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263020849816798,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16,PodSandboxId:219e2305f437bb171ac3c0a0f3b74223746d86635ce9c0b7e6a6e3af42395abd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722263008866581913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5pskj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2464169b-59c6-4285-a694-b80fa182e201,},Annotations:map[string]string{io.kubernetes.container.hash: dc4a
d01c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c,PodSandboxId:5a6383a54d1e77ded8ab01b30dd057a67a5e0c26578a8f32c94e020ef228fca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722263006566680784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-2dhx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f848393f-5676-41d7-b9ba-1959514af9da,},Annotations:map[string]string{io.kubernetes.container.hash: bba2d262,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee,PodSandboxId:c931f01508c51215de6eff1d671c54e346716bbce3bf6fc3e445b238241eede9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722263006509165892,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-414966,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e44ac6c9087dd4b2a935089311a5831f,},Annotations:map[string]string{io.kubernetes.container.hash: bbd2c012,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8,PodSandboxId:995dca5661b21350aa54a603da882962764c0918f734d3e8a9497c016f0367ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722263006440075888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-414966,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: d6186af066b5a6468d9db65e175ff32e,},Annotations:map[string]string{io.kubernetes.container.hash: bcee8383,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd,PodSandboxId:a935867510ef6d7f6d089131f3fe5230af0b9c9fcc91ede1144e2d4427a262ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722263006397891697,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 58713d53fe7d58950825c03b9966eee2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466,PodSandboxId:a682bc94e6ebc40d37cae79f208e03bace0fb84833f89ec30e9630d0994644c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722263006288983798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-414966,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ae8aee0c0e3f12fd9587843c4103aeff,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d467105-051d-4171-b9b0-6255e88c27ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71373f6ff6510       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   22 seconds ago      Running             kube-proxy                2                   a5e006c90fadb       kube-proxy-2dhx5
	56c8cefdf5416       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago      Running             coredns                   2                   219e2305f437b       coredns-7db6d8ff4d-5pskj
	e63c1e8adea00       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   26 seconds ago      Running             kube-controller-manager   2                   43fb3731c3034       kube-controller-manager-pause-414966
	4b7594cadd3ac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   26 seconds ago      Running             kube-scheduler            2                   7cf54881f6b05       kube-scheduler-pause-414966
	5629108c22c2b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Running             etcd                      2                   f080ba05af40f       etcd-pause-414966
	cfa261328b16b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   27 seconds ago      Running             kube-apiserver            2                   20ac66bd7fe78       kube-apiserver-pause-414966
	035d53d316c3b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   39 seconds ago      Exited              coredns                   1                   219e2305f437b       coredns-7db6d8ff4d-5pskj
	8c1298e8b84ad       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   41 seconds ago      Exited              kube-proxy                1                   5a6383a54d1e7       kube-proxy-2dhx5
	8135771c94856       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   41 seconds ago      Exited              etcd                      1                   c931f01508c51       etcd-pause-414966
	692a1cc699d0a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   41 seconds ago      Exited              kube-apiserver            1                   995dca5661b21       kube-apiserver-pause-414966
	f29c0422b10cd       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   41 seconds ago      Exited              kube-scheduler            1                   a935867510ef6       kube-scheduler-pause-414966
	1b4363a555ed2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   41 seconds ago      Exited              kube-controller-manager   1                   a682bc94e6ebc       kube-controller-manager-pause-414966
	
	
	==> coredns [035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40705 - 54386 "HINFO IN 7183347586325243228.5998467049896104713. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.654483181s
	
	
	==> coredns [56c8cefdf54165fa3a9c48868996d2ab3faed80f94192b71ea745f169ab2b3b5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34032 - 1873 "HINFO IN 6959458377755625769.4473733508205659263. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008494837s
	
	
	==> describe nodes <==
	Name:               pause-414966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-414966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=pause-414966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T14_22_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:22:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-414966
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:24:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:23:44 +0000   Mon, 29 Jul 2024 14:22:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:23:44 +0000   Mon, 29 Jul 2024 14:22:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:23:44 +0000   Mon, 29 Jul 2024 14:22:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:23:44 +0000   Mon, 29 Jul 2024 14:22:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.133
	  Hostname:    pause-414966
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 38e4ca4e96464f05b8b461e155716fcb
	  System UUID:                38e4ca4e-9646-4f05-b8b4-61e155716fcb
	  Boot ID:                    35273879-f3ba-43eb-8b3c-d89cb8b4a4db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5pskj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     94s
	  kube-system                 etcd-pause-414966                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         108s
	  kube-system                 kube-apiserver-pause-414966             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-pause-414966    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-2dhx5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-pause-414966             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 22s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  108s (x2 over 108s)  kubelet          Node pause-414966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x2 over 108s)  kubelet          Node pause-414966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x2 over 108s)  kubelet          Node pause-414966 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeReady                107s                 kubelet          Node pause-414966 status is now: NodeReady
	  Normal  RegisteredNode           95s                  node-controller  Node pause-414966 event: Registered Node pause-414966 in Controller
	  Normal  Starting                 28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)    kubelet          Node pause-414966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)    kubelet          Node pause-414966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)    kubelet          Node pause-414966 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11s                  node-controller  Node pause-414966 event: Registered Node pause-414966 in Controller
	
	
	==> dmesg <==
	[Jul29 14:22] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.066117] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075068] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.197157] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.141013] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.300364] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.333629] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.062397] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.869332] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.180104] kauditd_printk_skb: 82 callbacks suppressed
	[  +4.869592] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +2.930241] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.991740] systemd-fstab-generator[1492]: Ignoring "noauto" option for root device
	[ +11.481494] kauditd_printk_skb: 88 callbacks suppressed
	[Jul29 14:23] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.150342] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +0.189314] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.248779] systemd-fstab-generator[2488]: Ignoring "noauto" option for root device
	[  +0.901925] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +1.176563] systemd-fstab-generator[3158]: Ignoring "noauto" option for root device
	[ +12.017254] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.084583] kauditd_printk_skb: 243 callbacks suppressed
	[  +5.535235] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.236402] kauditd_printk_skb: 2 callbacks suppressed
	[Jul29 14:24] systemd-fstab-generator[4065]: Ignoring "noauto" option for root device
	
	
	==> etcd [5629108c22c2b4e30cc45188dc4ccfaccaccc60b4039556acacefd5e8e680205] <==
	{"level":"info","ts":"2024-07-29T14:23:41.271612Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:23:41.27164Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T14:23:41.27192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 switched to configuration voters=(706005828648577939)"}
	{"level":"info","ts":"2024-07-29T14:23:41.272005Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f95334f23140be1c","local-member-id":"9cc3c8f81f42b93","added-peer-id":"9cc3c8f81f42b93","added-peer-peer-urls":["https://192.168.50.133:2380"]}
	{"level":"info","ts":"2024-07-29T14:23:41.272181Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f95334f23140be1c","local-member-id":"9cc3c8f81f42b93","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:23:41.272234Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:23:41.297457Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T14:23:41.297802Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9cc3c8f81f42b93","initial-advertise-peer-urls":["https://192.168.50.133:2380"],"listen-peer-urls":["https://192.168.50.133:2380"],"advertise-client-urls":["https://192.168.50.133:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.133:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T14:23:41.300133Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T14:23:41.300262Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.133:2380"}
	{"level":"info","ts":"2024-07-29T14:23:41.300291Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.133:2380"}
	{"level":"info","ts":"2024-07-29T14:23:43.117545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T14:23:43.117607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:23:43.117707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 received MsgPreVoteResp from 9cc3c8f81f42b93 at term 2"}
	{"level":"info","ts":"2024-07-29T14:23:43.117724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T14:23:43.117731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 received MsgVoteResp from 9cc3c8f81f42b93 at term 3"}
	{"level":"info","ts":"2024-07-29T14:23:43.117742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T14:23:43.117753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9cc3c8f81f42b93 elected leader 9cc3c8f81f42b93 at term 3"}
	{"level":"info","ts":"2024-07-29T14:23:43.124267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:23:43.124216Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9cc3c8f81f42b93","local-member-attributes":"{Name:pause-414966 ClientURLs:[https://192.168.50.133:2379]}","request-path":"/0/members/9cc3c8f81f42b93/attributes","cluster-id":"f95334f23140be1c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:23:43.125423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:23:43.125845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:23:43.125902Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:23:43.126944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.133:2379"}
	{"level":"info","ts":"2024-07-29T14:23:43.128467Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee] <==
	{"level":"warn","ts":"2024-07-29T14:23:27.068256Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-29T14:23:27.068374Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.133:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.133:2380","--initial-cluster=pause-414966=https://192.168.50.133:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.133:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.133:2380","--name=pause-414966","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-07-29T14:23:27.068781Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-07-29T14:23:27.068845Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-29T14:23:27.068858Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.133:2380"]}
	{"level":"info","ts":"2024-07-29T14:23:27.068896Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T14:23:27.069854Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.133:2379"]}
	{"level":"info","ts":"2024-07-29T14:23:27.070015Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-414966","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.133:2380"],"listen-peer-urls":["https://192.168.50.133:2380"],"advertise-client-urls":["https://192.168.50.133:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.133:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-07-29T14:23:27.079687Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"9.340642ms"}
	{"level":"info","ts":"2024-07-29T14:23:27.094996Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T14:23:27.10368Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"f95334f23140be1c","local-member-id":"9cc3c8f81f42b93","commit-index":427}
	{"level":"info","ts":"2024-07-29T14:23:27.103846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T14:23:27.103909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cc3c8f81f42b93 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T14:23:27.103946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9cc3c8f81f42b93 [peers: [], term: 2, commit: 427, applied: 0, lastindex: 427, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T14:23:27.125208Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T14:23:27.147764Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":407}
	
	
	==> kernel <==
	 14:24:08 up 2 min,  0 users,  load average: 1.31, 0.52, 0.19
	Linux pause-414966 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8] <==
	
	
	==> kube-apiserver [cfa261328b16bcdd6b05d5ec620d854f6c4a2d081197c554c8e9feb9ac7bc632] <==
	I0729 14:23:44.592874       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 14:23:44.594191       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 14:23:44.596254       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 14:23:44.596341       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 14:23:44.596398       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 14:23:44.596420       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 14:23:44.596791       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 14:23:44.598760       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 14:23:44.598814       1 policy_source.go:224] refreshing policies
	I0729 14:23:44.600453       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 14:23:44.600488       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 14:23:44.600555       1 aggregator.go:165] initial CRD sync complete...
	I0729 14:23:44.600594       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 14:23:44.600616       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 14:23:44.600638       1 cache.go:39] Caches are synced for autoregister controller
	E0729 14:23:44.604302       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 14:23:44.627746       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 14:23:45.401875       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 14:23:46.297390       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 14:23:46.316722       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 14:23:46.360387       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 14:23:46.392674       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 14:23:46.400409       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 14:23:57.029764       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 14:23:57.059261       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466] <==
	
	
	==> kube-controller-manager [e63c1e8adea006c949983c0415f1680439a22ab7e7eef80fe28736e138c0bbfe] <==
	I0729 14:23:57.044197       1 shared_informer.go:320] Caches are synced for namespace
	I0729 14:23:57.046796       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 14:23:57.046914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.323µs"
	I0729 14:23:57.049226       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 14:23:57.049883       1 shared_informer.go:320] Caches are synced for TTL
	I0729 14:23:57.050993       1 shared_informer.go:320] Caches are synced for service account
	I0729 14:23:57.052333       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 14:23:57.054191       1 shared_informer.go:320] Caches are synced for taint
	I0729 14:23:57.054298       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 14:23:57.054386       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-414966"
	I0729 14:23:57.054437       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 14:23:57.056052       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 14:23:57.057208       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 14:23:57.057267       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 14:23:57.057631       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 14:23:57.068706       1 shared_informer.go:320] Caches are synced for GC
	I0729 14:23:57.130773       1 shared_informer.go:320] Caches are synced for HPA
	I0729 14:23:57.173447       1 shared_informer.go:320] Caches are synced for deployment
	I0729 14:23:57.232262       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 14:23:57.234409       1 shared_informer.go:320] Caches are synced for disruption
	I0729 14:23:57.248753       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 14:23:57.250190       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 14:23:57.678772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 14:23:57.697941       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 14:23:57.697970       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [71373f6ff651099e25c762001a374c3b6564582ffc269c23d29884b443476491] <==
	I0729 14:23:45.937949       1 server_linux.go:69] "Using iptables proxy"
	I0729 14:23:45.947555       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.133"]
	I0729 14:23:45.980377       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 14:23:45.980409       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:23:45.980423       1 server_linux.go:165] "Using iptables Proxier"
	I0729 14:23:45.982900       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 14:23:45.983207       1 server.go:872] "Version info" version="v1.30.3"
	I0729 14:23:45.983399       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:23:45.984516       1 config.go:192] "Starting service config controller"
	I0729 14:23:45.985962       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:23:45.984899       1 config.go:101] "Starting endpoint slice config controller"
	I0729 14:23:45.986008       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:23:45.985526       1 config.go:319] "Starting node config controller"
	I0729 14:23:45.986016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:23:46.086974       1 shared_informer.go:320] Caches are synced for node config
	I0729 14:23:46.087162       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:23:46.087189       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c] <==
	
	
	==> kube-scheduler [4b7594cadd3acbd9143c9605a3d5654dd84b0a0cf1cee9fc13919ab5f533ac63] <==
	I0729 14:23:42.029607       1 serving.go:380] Generated self-signed cert in-memory
	W0729 14:23:44.476597       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 14:23:44.476810       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:23:44.476915       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 14:23:44.476942       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 14:23:44.531700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 14:23:44.531854       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:23:44.540888       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 14:23:44.541063       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:23:44.546574       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 14:23:44.546706       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 14:23:44.646880       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd] <==
	
	
	==> kubelet <==
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.541840    3635 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae8aee0c0e3f12fd9587843c4103aeff-ca-certs\") pod \"kube-controller-manager-pause-414966\" (UID: \"ae8aee0c0e3f12fd9587843c4103aeff\") " pod="kube-system/kube-controller-manager-pause-414966"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: E0729 14:23:40.576760    3635 file.go:108] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd.yaml\": /etc/kubernetes/manifests/etcd.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: E0729 14:23:40.642328    3635 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-414966?timeout=10s\": dial tcp 192.168.50.133:8443: connect: connection refused" interval="400ms"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.738536    3635 kubelet_node_status.go:73] "Attempting to register node" node="pause-414966"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: E0729 14:23:40.739718    3635 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.133:8443: connect: connection refused" node="pause-414966"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.829318    3635 scope.go:117] "RemoveContainer" containerID="8135771c948561a1046c0e9adac1481fe77937def69d866e233bfb24215094ee"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.832778    3635 scope.go:117] "RemoveContainer" containerID="692a1cc699d0a690cd53786a86df00cfd6bcff1dcfd05e18a9f530ef5c585ce8"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.835892    3635 scope.go:117] "RemoveContainer" containerID="1b4363a555ed2549ec21bd47edcd8dfe6d69560aa54510cdca5e9f15a1472466"
	Jul 29 14:23:40 pause-414966 kubelet[3635]: I0729 14:23:40.839279    3635 scope.go:117] "RemoveContainer" containerID="f29c0422b10cd894aa147915e7eb22a41e8be365adb664295b301163d79cd6fd"
	Jul 29 14:23:41 pause-414966 kubelet[3635]: E0729 14:23:41.045071    3635 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-414966?timeout=10s\": dial tcp 192.168.50.133:8443: connect: connection refused" interval="800ms"
	Jul 29 14:23:41 pause-414966 kubelet[3635]: I0729 14:23:41.141989    3635 kubelet_node_status.go:73] "Attempting to register node" node="pause-414966"
	Jul 29 14:23:41 pause-414966 kubelet[3635]: E0729 14:23:41.143905    3635 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.133:8443: connect: connection refused" node="pause-414966"
	Jul 29 14:23:41 pause-414966 kubelet[3635]: I0729 14:23:41.945410    3635 kubelet_node_status.go:73] "Attempting to register node" node="pause-414966"
	Jul 29 14:23:44 pause-414966 kubelet[3635]: I0729 14:23:44.666817    3635 kubelet_node_status.go:112] "Node was previously registered" node="pause-414966"
	Jul 29 14:23:44 pause-414966 kubelet[3635]: I0729 14:23:44.667260    3635 kubelet_node_status.go:76] "Successfully registered node" node="pause-414966"
	Jul 29 14:23:44 pause-414966 kubelet[3635]: I0729 14:23:44.668726    3635 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 14:23:44 pause-414966 kubelet[3635]: I0729 14:23:44.673512    3635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.409992    3635 apiserver.go:52] "Watching apiserver"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.412937    3635 topology_manager.go:215] "Topology Admit Handler" podUID="f848393f-5676-41d7-b9ba-1959514af9da" podNamespace="kube-system" podName="kube-proxy-2dhx5"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.413082    3635 topology_manager.go:215] "Topology Admit Handler" podUID="2464169b-59c6-4285-a694-b80fa182e201" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5pskj"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.433700    3635 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.491875    3635 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f848393f-5676-41d7-b9ba-1959514af9da-lib-modules\") pod \"kube-proxy-2dhx5\" (UID: \"f848393f-5676-41d7-b9ba-1959514af9da\") " pod="kube-system/kube-proxy-2dhx5"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.492001    3635 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f848393f-5676-41d7-b9ba-1959514af9da-xtables-lock\") pod \"kube-proxy-2dhx5\" (UID: \"f848393f-5676-41d7-b9ba-1959514af9da\") " pod="kube-system/kube-proxy-2dhx5"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.714163    3635 scope.go:117] "RemoveContainer" containerID="035d53d316c3b2862b6d13703802b4be52b5d9820739c3b42bbaf79c8ba91f16"
	Jul 29 14:23:45 pause-414966 kubelet[3635]: I0729 14:23:45.716021    3635 scope.go:117] "RemoveContainer" containerID="8c1298e8b84ad3b0261c3dfa53b6a2d6f814335046ead4ebebfdb2d14fba959c"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-414966 -n pause-414966
helpers_test.go:261: (dbg) Run:  kubectl --context pause-414966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.10s)
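The same pause/second-start scenario can be approximated by hand for local triage, using the driver and container runtime this job ran with. This is a hedged sketch only: the profile name is taken from the logs above, and the flags shown are illustrative rather than the exact arguments TestPause/serial/SecondStartNoReconfiguration passes.

	# First start creates the profile; the second start is expected to come up without reconfiguring it.
	out/minikube-linux-amd64 start -p pause-414966 --driver=kvm2 --container-runtime=crio --alsologtostderr
	out/minikube-linux-amd64 start -p pause-414966 --driver=kvm2 --container-runtime=crio --alsologtostderr
	# Collect logs (as the post-mortem above does) and clean up.
	out/minikube-linux-amd64 -p pause-414966 logs
	out/minikube-linux-amd64 delete -p pause-414966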

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (272.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-360866 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-360866 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.172729341s)

                                                
                                                
-- stdout --
	* [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 14:28:31.273056 1033083 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:28:31.273272 1033083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:28:31.273314 1033083 out.go:304] Setting ErrFile to fd 2...
	I0729 14:28:31.273327 1033083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:28:31.273544 1033083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:28:31.274247 1033083 out.go:298] Setting JSON to false
	I0729 14:28:31.275820 1033083 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15063,"bootTime":1722248248,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:28:31.275950 1033083 start.go:139] virtualization: kvm guest
	I0729 14:28:31.278703 1033083 out.go:177] * [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:28:31.280095 1033083 notify.go:220] Checking for updates...
	I0729 14:28:31.280180 1033083 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:28:31.283448 1033083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:28:31.285434 1033083 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:28:31.286790 1033083 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:28:31.288073 1033083 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:28:31.289349 1033083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:28:31.291328 1033083 config.go:182] Loaded profile config "bridge-513289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:28:31.291489 1033083 config.go:182] Loaded profile config "enable-default-cni-513289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:28:31.291610 1033083 config.go:182] Loaded profile config "flannel-513289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:28:31.291787 1033083 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:28:31.340237 1033083 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 14:28:31.341530 1033083 start.go:297] selected driver: kvm2
	I0729 14:28:31.341552 1033083 start.go:901] validating driver "kvm2" against <nil>
	I0729 14:28:31.341567 1033083 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:28:31.342742 1033083 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:28:31.344535 1033083 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:28:31.362167 1033083 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:28:31.362247 1033083 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 14:28:31.362546 1033083 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:28:31.362591 1033083 cni.go:84] Creating CNI manager for ""
	I0729 14:28:31.362602 1033083 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:28:31.362617 1033083 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 14:28:31.362709 1033083 start.go:340] cluster config:
	{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:28:31.362865 1033083 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:28:31.364535 1033083 out.go:177] * Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	I0729 14:28:31.365828 1033083 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:28:31.365874 1033083 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:28:31.365888 1033083 cache.go:56] Caching tarball of preloaded images
	I0729 14:28:31.365982 1033083 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:28:31.366010 1033083 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:28:31.366126 1033083 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:28:31.366150 1033083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json: {Name:mk53ec475e7a6666c1fa7294f55c504c4c3e7a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:28:31.370940 1033083 start.go:360] acquireMachinesLock for old-k8s-version-360866: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:28:31.371004 1033083 start.go:364] duration metric: took 33.263µs to acquireMachinesLock for "old-k8s-version-360866"
	I0729 14:28:31.371030 1033083 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:28:31.371110 1033083 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 14:28:31.372913 1033083 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 14:28:31.373096 1033083 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:28:31.373155 1033083 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:28:31.390783 1033083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0729 14:28:31.391335 1033083 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:28:31.392000 1033083 main.go:141] libmachine: Using API Version  1
	I0729 14:28:31.392027 1033083 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:28:31.392541 1033083 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:28:31.392800 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:28:31.392978 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:28:31.393145 1033083 start.go:159] libmachine.API.Create for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:28:31.393176 1033083 client.go:168] LocalClient.Create starting
	I0729 14:28:31.393221 1033083 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 14:28:31.393265 1033083 main.go:141] libmachine: Decoding PEM data...
	I0729 14:28:31.393288 1033083 main.go:141] libmachine: Parsing certificate...
	I0729 14:28:31.393369 1033083 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 14:28:31.393393 1033083 main.go:141] libmachine: Decoding PEM data...
	I0729 14:28:31.393412 1033083 main.go:141] libmachine: Parsing certificate...
	I0729 14:28:31.393436 1033083 main.go:141] libmachine: Running pre-create checks...
	I0729 14:28:31.393453 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .PreCreateCheck
	I0729 14:28:31.393863 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:28:31.394348 1033083 main.go:141] libmachine: Creating machine...
	I0729 14:28:31.394366 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .Create
	I0729 14:28:31.394537 1033083 main.go:141] libmachine: (old-k8s-version-360866) Creating KVM machine...
	I0729 14:28:31.396044 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found existing default KVM network
	I0729 14:28:31.398278 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:31.398076 1033122 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002017f0}
	I0729 14:28:31.398361 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | created network xml: 
	I0729 14:28:31.398395 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | <network>
	I0729 14:28:31.398407 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |   <name>mk-old-k8s-version-360866</name>
	I0729 14:28:31.398415 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |   <dns enable='no'/>
	I0729 14:28:31.398423 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |   
	I0729 14:28:31.398431 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 14:28:31.398448 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |     <dhcp>
	I0729 14:28:31.398456 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 14:28:31.398465 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |     </dhcp>
	I0729 14:28:31.398473 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |   </ip>
	I0729 14:28:31.398480 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG |   
	I0729 14:28:31.398491 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | </network>
	I0729 14:28:31.398501 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | 
	I0729 14:28:31.404417 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | trying to create private KVM network mk-old-k8s-version-360866 192.168.39.0/24...
	I0729 14:28:31.510743 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | private KVM network mk-old-k8s-version-360866 192.168.39.0/24 created
	I0729 14:28:31.510783 1033083 main.go:141] libmachine: (old-k8s-version-360866) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866 ...
	I0729 14:28:31.510798 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:31.510713 1033122 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:28:31.510811 1033083 main.go:141] libmachine: (old-k8s-version-360866) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 14:28:31.510835 1033083 main.go:141] libmachine: (old-k8s-version-360866) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 14:28:31.925132 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:31.924932 1033122 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa...
	I0729 14:28:32.197741 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:32.197638 1033122 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/old-k8s-version-360866.rawdisk...
	I0729 14:28:32.197790 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Writing magic tar header
	I0729 14:28:32.197808 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Writing SSH key tar header
	I0729 14:28:32.197830 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:32.197810 1033122 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866 ...
	I0729 14:28:32.197953 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866
	I0729 14:28:32.197984 1033083 main.go:141] libmachine: (old-k8s-version-360866) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866 (perms=drwx------)
	I0729 14:28:32.198075 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 14:28:32.198094 1033083 main.go:141] libmachine: (old-k8s-version-360866) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 14:28:32.198116 1033083 main.go:141] libmachine: (old-k8s-version-360866) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 14:28:32.198131 1033083 main.go:141] libmachine: (old-k8s-version-360866) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 14:28:32.198142 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:28:32.198159 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 14:28:32.198172 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 14:28:32.198193 1033083 main.go:141] libmachine: (old-k8s-version-360866) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 14:28:32.207440 1033083 main.go:141] libmachine: (old-k8s-version-360866) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 14:28:32.207464 1033083 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:28:32.207480 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Checking permissions on dir: /home/jenkins
	I0729 14:28:32.207488 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Checking permissions on dir: /home
	I0729 14:28:32.207498 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Skipping /home - not owner
	I0729 14:28:32.208146 1033083 main.go:141] libmachine: (old-k8s-version-360866) define libvirt domain using xml: 
	I0729 14:28:32.208170 1033083 main.go:141] libmachine: (old-k8s-version-360866) <domain type='kvm'>
	I0729 14:28:32.208180 1033083 main.go:141] libmachine: (old-k8s-version-360866)   <name>old-k8s-version-360866</name>
	I0729 14:28:32.208189 1033083 main.go:141] libmachine: (old-k8s-version-360866)   <memory unit='MiB'>2200</memory>
	I0729 14:28:32.208198 1033083 main.go:141] libmachine: (old-k8s-version-360866)   <vcpu>2</vcpu>
	I0729 14:28:32.208205 1033083 main.go:141] libmachine: (old-k8s-version-360866)   <features>
	I0729 14:28:32.208213 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <acpi/>
	I0729 14:28:32.208220 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <apic/>
	I0729 14:28:32.208228 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <pae/>
	I0729 14:28:32.208236 1033083 main.go:141] libmachine: (old-k8s-version-360866)     
	I0729 14:28:32.208263 1033083 main.go:141] libmachine: (old-k8s-version-360866)   </features>
	I0729 14:28:32.208284 1033083 main.go:141] libmachine: (old-k8s-version-360866)   <cpu mode='host-passthrough'>
	I0729 14:28:32.208305 1033083 main.go:141] libmachine: (old-k8s-version-360866)   
	I0729 14:28:32.208316 1033083 main.go:141] libmachine: (old-k8s-version-360866)   </cpu>
	I0729 14:28:32.208325 1033083 main.go:141] libmachine: (old-k8s-version-360866)   <os>
	I0729 14:28:32.208333 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <type>hvm</type>
	I0729 14:28:32.208346 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <boot dev='cdrom'/>
	I0729 14:28:32.208365 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <boot dev='hd'/>
	I0729 14:28:32.208374 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <bootmenu enable='no'/>
	I0729 14:28:32.208381 1033083 main.go:141] libmachine: (old-k8s-version-360866)   </os>
	I0729 14:28:32.208389 1033083 main.go:141] libmachine: (old-k8s-version-360866)   <devices>
	I0729 14:28:32.208397 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <disk type='file' device='cdrom'>
	I0729 14:28:32.208425 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/boot2docker.iso'/>
	I0729 14:28:32.208438 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <target dev='hdc' bus='scsi'/>
	I0729 14:28:32.208447 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <readonly/>
	I0729 14:28:32.208454 1033083 main.go:141] libmachine: (old-k8s-version-360866)     </disk>
	I0729 14:28:32.208462 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <disk type='file' device='disk'>
	I0729 14:28:32.208472 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 14:28:32.208485 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/old-k8s-version-360866.rawdisk'/>
	I0729 14:28:32.208493 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <target dev='hda' bus='virtio'/>
	I0729 14:28:32.208501 1033083 main.go:141] libmachine: (old-k8s-version-360866)     </disk>
	I0729 14:28:32.208517 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <interface type='network'>
	I0729 14:28:32.208534 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <source network='mk-old-k8s-version-360866'/>
	I0729 14:28:32.208542 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <model type='virtio'/>
	I0729 14:28:32.208551 1033083 main.go:141] libmachine: (old-k8s-version-360866)     </interface>
	I0729 14:28:32.208559 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <interface type='network'>
	I0729 14:28:32.208568 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <source network='default'/>
	I0729 14:28:32.208576 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <model type='virtio'/>
	I0729 14:28:32.208584 1033083 main.go:141] libmachine: (old-k8s-version-360866)     </interface>
	I0729 14:28:32.208596 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <serial type='pty'>
	I0729 14:28:32.208604 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <target port='0'/>
	I0729 14:28:32.208611 1033083 main.go:141] libmachine: (old-k8s-version-360866)     </serial>
	I0729 14:28:32.208620 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <console type='pty'>
	I0729 14:28:32.208627 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <target type='serial' port='0'/>
	I0729 14:28:32.208635 1033083 main.go:141] libmachine: (old-k8s-version-360866)     </console>
	I0729 14:28:32.208643 1033083 main.go:141] libmachine: (old-k8s-version-360866)     <rng model='virtio'>
	I0729 14:28:32.208653 1033083 main.go:141] libmachine: (old-k8s-version-360866)       <backend model='random'>/dev/random</backend>
	I0729 14:28:32.208659 1033083 main.go:141] libmachine: (old-k8s-version-360866)     </rng>
	I0729 14:28:32.208683 1033083 main.go:141] libmachine: (old-k8s-version-360866)     
	I0729 14:28:32.208700 1033083 main.go:141] libmachine: (old-k8s-version-360866)     
	I0729 14:28:32.208713 1033083 main.go:141] libmachine: (old-k8s-version-360866)   </devices>
	I0729 14:28:32.208720 1033083 main.go:141] libmachine: (old-k8s-version-360866) </domain>
	I0729 14:28:32.208734 1033083 main.go:141] libmachine: (old-k8s-version-360866) 
	I0729 14:28:32.241655 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:3a:0b:69 in network default
	I0729 14:28:32.242441 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:32.242469 1033083 main.go:141] libmachine: (old-k8s-version-360866) Ensuring networks are active...
	I0729 14:28:32.243412 1033083 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network default is active
	I0729 14:28:32.243780 1033083 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network mk-old-k8s-version-360866 is active
	I0729 14:28:32.244387 1033083 main.go:141] libmachine: (old-k8s-version-360866) Getting domain xml...
	I0729 14:28:32.245193 1033083 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:28:32.804896 1033083 main.go:141] libmachine: (old-k8s-version-360866) Waiting to get IP...
	I0729 14:28:32.806165 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:32.806698 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:32.806720 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:32.806608 1033122 retry.go:31] will retry after 228.322166ms: waiting for machine to come up
	I0729 14:28:33.037367 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:33.037905 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:33.037933 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:33.037816 1033122 retry.go:31] will retry after 249.995013ms: waiting for machine to come up
	I0729 14:28:33.289328 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:33.290065 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:33.290088 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:33.289968 1033122 retry.go:31] will retry after 349.584363ms: waiting for machine to come up
	I0729 14:28:33.641502 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:33.642096 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:33.642116 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:33.642030 1033122 retry.go:31] will retry after 444.332574ms: waiting for machine to come up
	I0729 14:28:34.088753 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:34.089039 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:34.089067 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:34.089032 1033122 retry.go:31] will retry after 707.593433ms: waiting for machine to come up
	I0729 14:28:34.798096 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:34.798562 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:34.798591 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:34.798533 1033122 retry.go:31] will retry after 594.671601ms: waiting for machine to come up
	I0729 14:28:35.395034 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:35.400533 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:35.400568 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:35.395395 1033122 retry.go:31] will retry after 1.016936876s: waiting for machine to come up
	I0729 14:28:36.414067 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:36.414581 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:36.414609 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:36.414563 1033122 retry.go:31] will retry after 1.003950185s: waiting for machine to come up
	I0729 14:28:37.419568 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:37.420189 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:37.420218 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:37.420136 1033122 retry.go:31] will retry after 1.723914839s: waiting for machine to come up
	I0729 14:28:39.145463 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:39.146109 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:39.146136 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:39.146069 1033122 retry.go:31] will retry after 1.797341647s: waiting for machine to come up
	I0729 14:28:40.945369 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:40.945982 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:40.946039 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:40.945961 1033122 retry.go:31] will retry after 2.608222134s: waiting for machine to come up
	I0729 14:28:43.555973 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:43.556568 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:43.556595 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:43.556525 1033122 retry.go:31] will retry after 2.82927867s: waiting for machine to come up
	I0729 14:28:46.389608 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:46.390141 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:46.390169 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:46.390099 1033122 retry.go:31] will retry after 4.208985533s: waiting for machine to come up
	I0729 14:28:50.602671 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:50.603215 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:28:50.603270 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:28:50.603178 1033122 retry.go:31] will retry after 3.815282729s: waiting for machine to come up
	I0729 14:28:54.421712 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.422272 1033083 main.go:141] libmachine: (old-k8s-version-360866) Found IP for machine: 192.168.39.71
	I0729 14:28:54.422296 1033083 main.go:141] libmachine: (old-k8s-version-360866) Reserving static IP address...
	I0729 14:28:54.422312 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has current primary IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.422694 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"} in network mk-old-k8s-version-360866
	I0729 14:28:54.500886 1033083 main.go:141] libmachine: (old-k8s-version-360866) Reserved static IP address: 192.168.39.71
	I0729 14:28:54.500918 1033083 main.go:141] libmachine: (old-k8s-version-360866) Waiting for SSH to be available...
	I0729 14:28:54.500929 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Getting to WaitForSSH function...
	I0729 14:28:54.504082 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.504583 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:18:de:25}
	I0729 14:28:54.504614 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.504712 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH client type: external
	I0729 14:28:54.504739 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa (-rw-------)
	I0729 14:28:54.504767 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:28:54.504780 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | About to run SSH command:
	I0729 14:28:54.504799 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | exit 0
	I0729 14:28:54.633259 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | SSH cmd err, output: <nil>: 
	I0729 14:28:54.633520 1033083 main.go:141] libmachine: (old-k8s-version-360866) KVM machine creation complete!
	I0729 14:28:54.633833 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:28:54.634454 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:28:54.634777 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:28:54.635000 1033083 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 14:28:54.635013 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetState
	I0729 14:28:54.636303 1033083 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 14:28:54.636319 1033083 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 14:28:54.636327 1033083 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 14:28:54.636337 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:54.638977 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.639346 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:54.639389 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.639603 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:54.639835 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:54.640001 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:54.640152 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:54.640361 1033083 main.go:141] libmachine: Using SSH client type: native
	I0729 14:28:54.640642 1033083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:28:54.640661 1033083 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 14:28:54.748936 1033083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:28:54.748964 1033083 main.go:141] libmachine: Detecting the provisioner...
	I0729 14:28:54.748976 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:54.752709 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.753242 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:54.753294 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.753604 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:54.753877 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:54.754157 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:54.754345 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:54.754587 1033083 main.go:141] libmachine: Using SSH client type: native
	I0729 14:28:54.754839 1033083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:28:54.754860 1033083 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 14:28:54.867483 1033083 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 14:28:54.867596 1033083 main.go:141] libmachine: found compatible host: buildroot
	I0729 14:28:54.867607 1033083 main.go:141] libmachine: Provisioning with buildroot...
	I0729 14:28:54.867620 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:28:54.867868 1033083 buildroot.go:166] provisioning hostname "old-k8s-version-360866"
	I0729 14:28:54.867890 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:28:54.868048 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:54.870879 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.871281 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:54.871302 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:54.871535 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:54.871749 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:54.871909 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:54.872019 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:54.872124 1033083 main.go:141] libmachine: Using SSH client type: native
	I0729 14:28:54.872327 1033083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:28:54.872339 1033083 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-360866 && echo "old-k8s-version-360866" | sudo tee /etc/hostname
	I0729 14:28:55.013729 1033083 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-360866
	
	I0729 14:28:55.013760 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:55.017121 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.017530 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.017561 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.017769 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:55.018024 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:55.018265 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:55.018449 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:55.018655 1033083 main.go:141] libmachine: Using SSH client type: native
	I0729 14:28:55.018871 1033083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:28:55.018898 1033083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-360866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-360866/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-360866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:28:55.142653 1033083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:28:55.142691 1033083 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:28:55.142730 1033083 buildroot.go:174] setting up certificates
	I0729 14:28:55.142744 1033083 provision.go:84] configureAuth start
	I0729 14:28:55.142759 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:28:55.143097 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:28:55.146500 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.146971 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.146994 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.147132 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:55.149756 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.150160 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.150198 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.150376 1033083 provision.go:143] copyHostCerts
	I0729 14:28:55.150452 1033083 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:28:55.150464 1033083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:28:55.150545 1033083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:28:55.150697 1033083 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:28:55.150711 1033083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:28:55.150749 1033083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:28:55.150927 1033083 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:28:55.150942 1033083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:28:55.150988 1033083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:28:55.151077 1033083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-360866 san=[127.0.0.1 192.168.39.71 localhost minikube old-k8s-version-360866]
	I0729 14:28:55.255992 1033083 provision.go:177] copyRemoteCerts
	I0729 14:28:55.256060 1033083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:28:55.256087 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:55.259030 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.259473 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.259509 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.259640 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:55.259835 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:55.259993 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:55.260161 1033083 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:28:55.344171 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:28:55.376014 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 14:28:55.401330 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 14:28:55.431813 1033083 provision.go:87] duration metric: took 289.046144ms to configureAuth
	I0729 14:28:55.431854 1033083 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:28:55.432088 1033083 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:28:55.432202 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:55.435653 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.436053 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.436119 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.436303 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:55.436545 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:55.436800 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:55.436971 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:55.437196 1033083 main.go:141] libmachine: Using SSH client type: native
	I0729 14:28:55.437411 1033083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:28:55.437441 1033083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:28:55.747185 1033083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
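A minimal shell sketch (not output from this run) of how the drop-in written above could be checked on the guest; the path and option come from the log lines above, while the verification commands themselves are an assumption about how one would confirm them:

    # show the sysconfig drop-in that the SSH command wrote
    cat /etc/sysconfig/crio.minikube
    # confirm cri-o came back up after the restart and inspect its unit
    systemctl is-active crio
    systemctl cat crio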
	I0729 14:28:55.747237 1033083 main.go:141] libmachine: Checking connection to Docker...
	I0729 14:28:55.747251 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetURL
	I0729 14:28:55.748709 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using libvirt version 6000000
	I0729 14:28:55.751287 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.751816 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.751847 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.752045 1033083 main.go:141] libmachine: Docker is up and running!
	I0729 14:28:55.752060 1033083 main.go:141] libmachine: Reticulating splines...
	I0729 14:28:55.752068 1033083 client.go:171] duration metric: took 24.358881527s to LocalClient.Create
	I0729 14:28:55.752097 1033083 start.go:167] duration metric: took 24.358953487s to libmachine.API.Create "old-k8s-version-360866"
	I0729 14:28:55.752110 1033083 start.go:293] postStartSetup for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:28:55.752124 1033083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:28:55.752142 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:28:55.752457 1033083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:28:55.752491 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:55.755010 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.755458 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.755491 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.755656 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:55.755842 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:55.756027 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:55.756242 1033083 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:28:55.840838 1033083 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:28:55.845960 1033083 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:28:55.845992 1033083 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:28:55.846062 1033083 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:28:55.846174 1033083 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:28:55.846279 1033083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:28:55.858605 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:28:55.888120 1033083 start.go:296] duration metric: took 135.989345ms for postStartSetup
	I0729 14:28:55.888186 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:28:55.889170 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:28:55.892860 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.893331 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.893362 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.893703 1033083 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:28:55.893946 1033083 start.go:128] duration metric: took 24.522820544s to createHost
	I0729 14:28:55.894000 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:55.897070 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.897467 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:55.897500 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:55.897704 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:55.897938 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:55.898151 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:55.898335 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:55.898484 1033083 main.go:141] libmachine: Using SSH client type: native
	I0729 14:28:55.898691 1033083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:28:55.898719 1033083 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:28:56.015142 1033083 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263335.995115949
	
	I0729 14:28:56.015171 1033083 fix.go:216] guest clock: 1722263335.995115949
	I0729 14:28:56.015180 1033083 fix.go:229] Guest: 2024-07-29 14:28:55.995115949 +0000 UTC Remote: 2024-07-29 14:28:55.893974006 +0000 UTC m=+24.670652751 (delta=101.141943ms)
	I0729 14:28:56.015210 1033083 fix.go:200] guest clock delta is within tolerance: 101.141943ms
	I0729 14:28:56.015217 1033083 start.go:83] releasing machines lock for "old-k8s-version-360866", held for 24.644202858s
	I0729 14:28:56.015248 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:28:56.015534 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:28:56.018706 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:56.019185 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:56.019217 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:56.019429 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:28:56.020020 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:28:56.020260 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:28:56.020378 1033083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:28:56.020439 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:56.020754 1033083 ssh_runner.go:195] Run: cat /version.json
	I0729 14:28:56.020784 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:28:56.023662 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:56.023983 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:56.024079 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:56.024110 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:56.024461 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:56.024486 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:56.024498 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:56.024731 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:28:56.024781 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:56.025020 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:56.025060 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:28:56.025231 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:28:56.025226 1033083 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:28:56.025677 1033083 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:28:56.142217 1033083 ssh_runner.go:195] Run: systemctl --version
	I0729 14:28:56.151912 1033083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:28:56.334856 1033083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:28:56.344077 1033083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:28:56.344178 1033083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:28:56.363476 1033083 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:28:56.363506 1033083 start.go:495] detecting cgroup driver to use...
	I0729 14:28:56.363580 1033083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:28:56.384876 1033083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:28:56.401968 1033083 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:28:56.402085 1033083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:28:56.422352 1033083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:28:56.441341 1033083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:28:56.596194 1033083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:28:56.778640 1033083 docker.go:233] disabling docker service ...
	I0729 14:28:56.778732 1033083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:28:56.798479 1033083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:28:56.815936 1033083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:28:56.946778 1033083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:28:57.076528 1033083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:28:57.096064 1033083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:28:57.116883 1033083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:28:57.116955 1033083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:28:57.130346 1033083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:28:57.130442 1033083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:28:57.144906 1033083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:28:57.160964 1033083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:28:57.176326 1033083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:28:57.189969 1033083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:28:57.201105 1033083 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:28:57.201180 1033083 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:28:57.218814 1033083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
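The fallback above (the sysctl probe fails until br_netfilter is loaded, after which IPv4 forwarding is enabled) can be reproduced by hand; a sketch with the sysctl key and module name taken from the log:

    # fails with status 255 while the bridge netfilter module is absent
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    # re-check the key, then make sure the kernel forwards pod traffic
    sudo sysctl net.bridge.bridge-nf-call-iptables
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward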
	I0729 14:28:57.232325 1033083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:28:57.364202 1033083 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:28:57.527340 1033083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:28:57.527439 1033083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:28:57.533358 1033083 start.go:563] Will wait 60s for crictl version
	I0729 14:28:57.533418 1033083 ssh_runner.go:195] Run: which crictl
	I0729 14:28:57.537637 1033083 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:28:57.585285 1033083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:28:57.585377 1033083 ssh_runner.go:195] Run: crio --version
	I0729 14:28:57.617507 1033083 ssh_runner.go:195] Run: crio --version
	I0729 14:28:57.651360 1033083 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:28:57.652686 1033083 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:28:57.656645 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:57.657061 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:28:47 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:28:57.657097 1033083 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:28:57.657392 1033083 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:28:57.661837 1033083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:28:57.677914 1033083 kubeadm.go:883] updating cluster {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:28:57.678020 1033083 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:28:57.678071 1033083 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:28:57.712417 1033083 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:28:57.712527 1033083 ssh_runner.go:195] Run: which lz4
	I0729 14:28:57.716787 1033083 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:28:57.721164 1033083 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:28:57.721197 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:28:59.430984 1033083 crio.go:462] duration metric: took 1.714228279s to copy over tarball
	I0729 14:28:59.431063 1033083 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:29:02.448086 1033083 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.016981225s)
	I0729 14:29:02.448123 1033083 crio.go:469] duration metric: took 3.017108243s to extract the tarball
	I0729 14:29:02.448133 1033083 ssh_runner.go:146] rm: /preloaded.tar.lz4
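For context, a sketch (not from this run) of how a preload tarball like the one copied and extracted above can be inspected from the cache directory on the build host; having lz4 installed locally is an assumption:

    cd /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball
    # verify the lz4 stream, then peek at the tree that tar unpacks under /var
    lz4 -t preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    tar -I lz4 -tf preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 | head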
	I0729 14:29:02.492298 1033083 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:29:02.545251 1033083 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:29:02.545289 1033083 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:29:02.545382 1033083 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:29:02.545457 1033083 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:29:02.545497 1033083 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:29:02.545715 1033083 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:29:02.545729 1033083 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:29:02.545735 1033083 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:29:02.545715 1033083 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:29:02.545390 1033083 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:29:02.554171 1033083 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:29:02.554200 1033083 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:29:02.554265 1033083 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:29:02.554367 1033083 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:29:02.554402 1033083 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:29:02.554534 1033083 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:29:02.554536 1033083 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:29:02.554579 1033083 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:29:02.726407 1033083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:29:02.728856 1033083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:29:02.729442 1033083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:29:02.733189 1033083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:29:02.745554 1033083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:29:02.755700 1033083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:29:02.789406 1033083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:29:02.847803 1033083 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:29:02.847884 1033083 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:29:02.847943 1033083 ssh_runner.go:195] Run: which crictl
	I0729 14:29:02.850146 1033083 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:29:02.904310 1033083 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:29:02.904375 1033083 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:29:02.904442 1033083 ssh_runner.go:195] Run: which crictl
	I0729 14:29:02.924597 1033083 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:29:02.924664 1033083 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:29:02.924728 1033083 ssh_runner.go:195] Run: which crictl
	I0729 14:29:02.924616 1033083 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:29:02.924816 1033083 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:29:02.924846 1033083 ssh_runner.go:195] Run: which crictl
	I0729 14:29:02.945314 1033083 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:29:02.945370 1033083 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:29:02.945509 1033083 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:29:02.945533 1033083 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:29:02.945592 1033083 ssh_runner.go:195] Run: which crictl
	I0729 14:29:02.945748 1033083 ssh_runner.go:195] Run: which crictl
	I0729 14:29:02.958980 1033083 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:29:02.959002 1033083 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:29:02.959039 1033083 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:29:02.959078 1033083 ssh_runner.go:195] Run: which crictl
	I0729 14:29:03.074412 1033083 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:29:03.074444 1033083 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:29:03.074480 1033083 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:29:03.074534 1033083 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:29:03.074538 1033083 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:29:03.074593 1033083 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:29:03.074623 1033083 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:29:03.218314 1033083 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:29:03.218330 1033083 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:29:03.218392 1033083 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:29:03.218444 1033083 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:29:03.218482 1033083 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:29:03.218504 1033083 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:29:03.218547 1033083 cache_images.go:92] duration metric: took 673.236936ms to LoadCachedImages
	W0729 14:29:03.218630 1033083 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 14:29:03.218646 1033083 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.20.0 crio true true} ...
	I0729 14:29:03.218782 1033083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-360866 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
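The [Unit]/[Service] fragment above is what minikube installs as a kubelet drop-in a few lines further down (the scp calls that write /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service). A sketch, assuming shell access to the guest, of confirming the flags the kubelet will actually start with:

    # unit file plus every drop-in, including 10-kubeadm.conf
    systemctl cat kubelet
    # the ExecStart that systemd resolved after the daemon-reload below
    systemctl show kubelet --property=ExecStart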
	I0729 14:29:03.218862 1033083 ssh_runner.go:195] Run: crio config
	I0729 14:29:03.289105 1033083 cni.go:84] Creating CNI manager for ""
	I0729 14:29:03.289127 1033083 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:29:03.289139 1033083 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:29:03.289165 1033083 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-360866 NodeName:old-k8s-version-360866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:29:03.289356 1033083 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-360866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
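The three documents above are written to /var/tmp/minikube/kubeadm.yaml.new just below and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm runs. A hedged sketch of exercising such a config without mutating node state, assuming the v1.20 kubeadm binary already present on the guest:

    # render the full init flow for this config without applying anything
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run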
	I0729 14:29:03.289435 1033083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:29:03.303654 1033083 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:29:03.303732 1033083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:29:03.316796 1033083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 14:29:03.338085 1033083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:29:03.360446 1033083 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 14:29:03.381660 1033083 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0729 14:29:03.386287 1033083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
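The grep-and-rewrite above (also used earlier for host.minikube.internal) keeps /etc/hosts idempotent: any existing line for the name is filtered out before the fresh entry is appended. A small sketch of checking the result from inside the VM; the names come from the log, the lookup command is an assumption:

    # both minikube-internal names should now resolve locally
    getent hosts host.minikube.internal control-plane.minikube.internal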
	I0729 14:29:03.402580 1033083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:29:03.531845 1033083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:29:03.550412 1033083 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866 for IP: 192.168.39.71
	I0729 14:29:03.550443 1033083 certs.go:194] generating shared ca certs ...
	I0729 14:29:03.550467 1033083 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:29:03.550655 1033083 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:29:03.550692 1033083 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:29:03.550701 1033083 certs.go:256] generating profile certs ...
	I0729 14:29:03.550754 1033083 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key
	I0729 14:29:03.550767 1033083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.crt with IP's: []
	I0729 14:29:03.933720 1033083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.crt ...
	I0729 14:29:03.933759 1033083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.crt: {Name:mk3d8bc88a0ce20bcfa5404a08a65e85db78e645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:29:03.964804 1033083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key ...
	I0729 14:29:03.964868 1033083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key: {Name:mk48bc2e2ebae699e7ba4e77d49d00b907178223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:29:03.994138 1033083 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0
	I0729 14:29:03.994199 1033083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt.98c2aed0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.71]
	I0729 14:29:04.252140 1033083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt.98c2aed0 ...
	I0729 14:29:04.252173 1033083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt.98c2aed0: {Name:mk12d5daabd152d70af6ca1d3212809f89918486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:29:04.252930 1033083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0 ...
	I0729 14:29:04.252963 1033083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0: {Name:mk34ecd0a48f8d96bab30ff70e0599d475d2a15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:29:04.264062 1033083 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt.98c2aed0 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt
	I0729 14:29:04.264206 1033083 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key
	I0729 14:29:04.264290 1033083 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key
	I0729 14:29:04.264309 1033083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt with IP's: []
	I0729 14:29:04.396655 1033083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt ...
	I0729 14:29:04.396697 1033083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt: {Name:mkeb19120719e624a67a1782b8bf1fea63a4166b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:29:04.397463 1033083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key ...
	I0729 14:29:04.397491 1033083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key: {Name:mk27da796653fe49cc3ad4d61719a18ae344d462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:29:04.405917 1033083 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:29:04.405995 1033083 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:29:04.406007 1033083 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:29:04.406036 1033083 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:29:04.406064 1033083 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:29:04.406090 1033083 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:29:04.406142 1033083 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:29:04.406951 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:29:04.439141 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:29:04.473037 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:29:04.507087 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:29:04.538985 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 14:29:04.615373 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:29:04.665658 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:29:04.691890 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:29:04.717016 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:29:04.743019 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:29:04.768344 1033083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:29:04.792612 1033083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:29:04.810340 1033083 ssh_runner.go:195] Run: openssl version
	I0729 14:29:04.817316 1033083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:29:04.828774 1033083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:29:04.833568 1033083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:29:04.833630 1033083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:29:04.840263 1033083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:29:04.851846 1033083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:29:04.863293 1033083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:29:04.867773 1033083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:29:04.867832 1033083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:29:04.873555 1033083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:29:04.884556 1033083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:29:04.895309 1033083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:29:04.899781 1033083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:29:04.899829 1033083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:29:04.906286 1033083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
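The test/ln pairs above follow the OpenSSL c_rehash convention: the trust symlink under /etc/ssl/certs is named after the certificate's subject hash plus a ".0" suffix (3ec20f2e.0 for 9820462.pem in this run). A sketch of deriving that name by hand for the same file, with the hash taken from openssl rather than hard-coded:

    # compute the subject hash and create the matching trust symlink
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem)
    sudo ln -fs /etc/ssl/certs/9820462.pem "/etc/ssl/certs/${h}.0"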
	I0729 14:29:04.917685 1033083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:29:04.921845 1033083 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 14:29:04.921897 1033083 kubeadm.go:392] StartCluster: {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:29:04.921971 1033083 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:29:04.922048 1033083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:29:04.962995 1033083 cri.go:89] found id: ""
	I0729 14:29:04.963071 1033083 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:29:04.974907 1033083 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:29:04.984817 1033083 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:29:04.994117 1033083 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:29:04.994149 1033083 kubeadm.go:157] found existing configuration files:
	
	I0729 14:29:04.994202 1033083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:29:05.004445 1033083 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:29:05.004538 1033083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:29:05.014500 1033083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:29:05.024243 1033083 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:29:05.024301 1033083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:29:05.033935 1033083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:29:05.046745 1033083 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:29:05.046836 1033083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:29:05.059294 1033083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:29:05.068570 1033083 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:29:05.068623 1033083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:29:05.080920 1033083 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:29:05.371363 1033083 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:29:05.371430 1033083 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:29:05.576570 1033083 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:29:05.576735 1033083 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:29:05.576867 1033083 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:29:05.804009 1033083 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:29:05.806566 1033083 out.go:204]   - Generating certificates and keys ...
	I0729 14:29:05.806690 1033083 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:29:05.806817 1033083 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:29:06.113559 1033083 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 14:29:06.351141 1033083 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 14:29:06.625452 1033083 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 14:29:06.783650 1033083 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 14:29:07.186617 1033083 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 14:29:07.186993 1033083 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-360866] and IPs [192.168.39.71 127.0.0.1 ::1]
	I0729 14:29:07.336334 1033083 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 14:29:07.336610 1033083 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-360866] and IPs [192.168.39.71 127.0.0.1 ::1]
	I0729 14:29:07.726254 1033083 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 14:29:07.865618 1033083 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 14:29:08.162756 1033083 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 14:29:08.163277 1033083 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:29:08.403171 1033083 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:29:08.547258 1033083 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:29:09.171169 1033083 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:29:09.471313 1033083 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:29:09.506761 1033083 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:29:09.506952 1033083 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:29:09.507035 1033083 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:29:09.692137 1033083 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:29:09.694565 1033083 out.go:204]   - Booting up control plane ...
	I0729 14:29:09.694686 1033083 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:29:09.709259 1033083 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:29:09.710745 1033083 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:29:09.713841 1033083 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:29:09.718022 1033083 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:29:49.715836 1033083 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:29:49.716775 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:29:49.717043 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:29:54.717379 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:29:54.717610 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:30:04.718558 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:30:04.718819 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:30:24.720436 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:30:24.720666 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:31:04.720245 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:31:04.720801 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:31:04.720975 1033083 kubeadm.go:310] 
	I0729 14:31:04.721053 1033083 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:31:04.721139 1033083 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:31:04.721147 1033083 kubeadm.go:310] 
	I0729 14:31:04.721224 1033083 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:31:04.721325 1033083 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:31:04.721586 1033083 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:31:04.721603 1033083 kubeadm.go:310] 
	I0729 14:31:04.721802 1033083 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:31:04.721881 1033083 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:31:04.721955 1033083 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:31:04.721964 1033083 kubeadm.go:310] 
	I0729 14:31:04.722215 1033083 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:31:04.722454 1033083 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:31:04.722482 1033083 kubeadm.go:310] 
	I0729 14:31:04.722735 1033083 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:31:04.723087 1033083 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:31:04.723367 1033083 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:31:04.723568 1033083 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:31:04.723606 1033083 kubeadm.go:310] 
	I0729 14:31:04.724134 1033083 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:31:04.724305 1033083 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:31:04.724520 1033083 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 14:31:04.724607 1033083 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-360866] and IPs [192.168.39.71 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-360866] and IPs [192.168.39.71 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-360866] and IPs [192.168.39.71 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-360866] and IPs [192.168.39.71 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 14:31:04.724677 1033083 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:31:06.596801 1033083 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.87209194s)
	I0729 14:31:06.596881 1033083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:31:06.612255 1033083 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:31:06.622028 1033083 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:31:06.622049 1033083 kubeadm.go:157] found existing configuration files:
	
	I0729 14:31:06.622091 1033083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:31:06.631717 1033083 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:31:06.631784 1033083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:31:06.642243 1033083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:31:06.651613 1033083 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:31:06.651681 1033083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:31:06.661173 1033083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:31:06.670564 1033083 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:31:06.670619 1033083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:31:06.679691 1033083 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:31:06.689706 1033083 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:31:06.689761 1033083 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:31:06.698739 1033083 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:31:06.766250 1033083 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:31:06.766334 1033083 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:31:06.927966 1033083 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:31:06.928067 1033083 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:31:06.928198 1033083 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:31:07.112943 1033083 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:31:07.114783 1033083 out.go:204]   - Generating certificates and keys ...
	I0729 14:31:07.114894 1033083 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:31:07.114990 1033083 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:31:07.115097 1033083 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:31:07.115177 1033083 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:31:07.115276 1033083 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:31:07.115350 1033083 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:31:07.115676 1033083 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:31:07.116032 1033083 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:31:07.116473 1033083 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:31:07.116865 1033083 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:31:07.116904 1033083 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:31:07.116985 1033083 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:31:07.316152 1033083 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:31:07.401859 1033083 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:31:07.552704 1033083 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:31:07.600641 1033083 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:31:07.617139 1033083 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:31:07.618488 1033083 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:31:07.618610 1033083 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:31:07.801023 1033083 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:31:07.802902 1033083 out.go:204]   - Booting up control plane ...
	I0729 14:31:07.803029 1033083 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:31:07.820035 1033083 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:31:07.821337 1033083 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:31:07.822102 1033083 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:31:07.824291 1033083 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:31:47.826055 1033083 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:31:47.826488 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:31:47.826705 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:31:52.827162 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:31:52.827378 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:32:02.827949 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:32:02.828177 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:32:22.829215 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:32:22.829445 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:33:02.829424 1033083 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:33:02.829626 1033083 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:33:02.829650 1033083 kubeadm.go:310] 
	I0729 14:33:02.829697 1033083 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:33:02.829750 1033083 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:33:02.829766 1033083 kubeadm.go:310] 
	I0729 14:33:02.829795 1033083 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:33:02.829845 1033083 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:33:02.829955 1033083 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:33:02.829971 1033083 kubeadm.go:310] 
	I0729 14:33:02.830079 1033083 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:33:02.830114 1033083 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:33:02.830143 1033083 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:33:02.830149 1033083 kubeadm.go:310] 
	I0729 14:33:02.830240 1033083 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:33:02.830327 1033083 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:33:02.830333 1033083 kubeadm.go:310] 
	I0729 14:33:02.830462 1033083 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:33:02.830537 1033083 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:33:02.830623 1033083 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:33:02.830722 1033083 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:33:02.830735 1033083 kubeadm.go:310] 
	I0729 14:33:02.831313 1033083 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:33:02.831420 1033083 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:33:02.831505 1033083 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:33:02.831592 1033083 kubeadm.go:394] duration metric: took 3m57.90970003s to StartCluster
	I0729 14:33:02.831651 1033083 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:33:02.831722 1033083 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:33:02.869996 1033083 cri.go:89] found id: ""
	I0729 14:33:02.870032 1033083 logs.go:276] 0 containers: []
	W0729 14:33:02.870043 1033083 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:33:02.870051 1033083 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:33:02.870134 1033083 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:33:02.903198 1033083 cri.go:89] found id: ""
	I0729 14:33:02.903232 1033083 logs.go:276] 0 containers: []
	W0729 14:33:02.903244 1033083 logs.go:278] No container was found matching "etcd"
	I0729 14:33:02.903253 1033083 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:33:02.903325 1033083 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:33:02.934359 1033083 cri.go:89] found id: ""
	I0729 14:33:02.934386 1033083 logs.go:276] 0 containers: []
	W0729 14:33:02.934394 1033083 logs.go:278] No container was found matching "coredns"
	I0729 14:33:02.934400 1033083 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:33:02.934463 1033083 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:33:02.968874 1033083 cri.go:89] found id: ""
	I0729 14:33:02.968910 1033083 logs.go:276] 0 containers: []
	W0729 14:33:02.968921 1033083 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:33:02.968929 1033083 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:33:02.968999 1033083 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:33:02.999861 1033083 cri.go:89] found id: ""
	I0729 14:33:02.999893 1033083 logs.go:276] 0 containers: []
	W0729 14:33:02.999901 1033083 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:33:02.999907 1033083 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:33:02.999967 1033083 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:33:03.032951 1033083 cri.go:89] found id: ""
	I0729 14:33:03.032981 1033083 logs.go:276] 0 containers: []
	W0729 14:33:03.032989 1033083 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:33:03.032995 1033083 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:33:03.033043 1033083 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:33:03.066037 1033083 cri.go:89] found id: ""
	I0729 14:33:03.066060 1033083 logs.go:276] 0 containers: []
	W0729 14:33:03.066068 1033083 logs.go:278] No container was found matching "kindnet"
	I0729 14:33:03.066079 1033083 logs.go:123] Gathering logs for kubelet ...
	I0729 14:33:03.066097 1033083 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:33:03.116438 1033083 logs.go:123] Gathering logs for dmesg ...
	I0729 14:33:03.116476 1033083 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:33:03.130142 1033083 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:33:03.130174 1033083 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:33:03.243204 1033083 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:33:03.243226 1033083 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:33:03.243241 1033083 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:33:03.337978 1033083 logs.go:123] Gathering logs for container status ...
	I0729 14:33:03.338018 1033083 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 14:33:03.377862 1033083 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:33:03.377915 1033083 out.go:239] * 
	* 
	W0729 14:33:03.377989 1033083 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:33:03.378015 1033083 out.go:239] * 
	* 
	W0729 14:33:03.378868 1033083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:33:03.382131 1033083 out.go:177] 
	W0729 14:33:03.383439 1033083 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:33:03.383497 1033083 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:33:03.383530 1033083 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 14:33:03.385042 1033083 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-360866 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866
E0729 14:33:03.625283  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 6 (229.445569ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:33:03.656225 1038817 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-360866" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (272.45s)
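Note on reproduction: the failure above is the K8S_KUBELET_NOT_RUNNING exit (status 109) during kubeadm init for Kubernetes v1.20.0 on the crio runtime, and the log itself suggests passing the kubelet cgroup driver explicitly. A sketch of a local retry that mirrors the failing invocation plus the suggested flag (not a verified fix) would be:

	out/minikube-linux-amd64 start -p old-k8s-version-360866 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If it still fails, the troubleshooting commands printed by kubeadm ('journalctl -xeu kubelet' and 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause') can be run inside the guest, for example via 'minikube ssh -p old-k8s-version-360866', to check whether the kubelet or any control-plane container came up at all.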

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-603534 --alsologtostderr -v=3
E0729 14:30:31.596304  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:31.601647  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:31.611896  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:31.632217  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:31.672533  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:31.752982  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:31.913407  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:32.234053  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:32.875079  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:34.155646  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:36.715787  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:41.837020  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:52.077201  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:30:53.713335  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 14:31:10.159620  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:10.164897  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:10.175196  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:10.195503  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:10.235848  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:10.316196  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:10.476980  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-603534 --alsologtostderr -v=3: exit status 82 (2m0.767285735s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-603534"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 14:30:24.974612 1037807 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:30:24.974736 1037807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:30:24.974743 1037807 out.go:304] Setting ErrFile to fd 2...
	I0729 14:30:24.974747 1037807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:30:24.974938 1037807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:30:24.975258 1037807 out.go:298] Setting JSON to false
	I0729 14:30:24.975356 1037807 mustload.go:65] Loading cluster: no-preload-603534
	I0729 14:30:24.975683 1037807 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:30:24.975756 1037807 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/config.json ...
	I0729 14:30:24.975960 1037807 mustload.go:65] Loading cluster: no-preload-603534
	I0729 14:30:24.976072 1037807 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:30:24.976106 1037807 stop.go:39] StopHost: no-preload-603534
	I0729 14:30:24.976673 1037807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:30:24.976723 1037807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:30:24.993468 1037807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I0729 14:30:24.993994 1037807 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:30:24.994734 1037807 main.go:141] libmachine: Using API Version  1
	I0729 14:30:24.994760 1037807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:30:24.995146 1037807 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:30:24.997661 1037807 out.go:177] * Stopping node "no-preload-603534"  ...
	I0729 14:30:24.999414 1037807 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 14:30:24.999469 1037807 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:30:24.999758 1037807 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 14:30:24.999788 1037807 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:30:25.003611 1037807 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:30:25.004016 1037807 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:29:12 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:30:25.004046 1037807 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:30:25.004312 1037807 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:30:25.004525 1037807 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:30:25.004712 1037807 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:30:25.004852 1037807 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:30:25.122455 1037807 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 14:30:25.180209 1037807 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 14:30:25.243027 1037807 main.go:141] libmachine: Stopping "no-preload-603534"...
	I0729 14:30:25.243062 1037807 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:30:25.244883 1037807 main.go:141] libmachine: (no-preload-603534) Calling .Stop
	I0729 14:30:25.248777 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 0/120
	I0729 14:30:26.250887 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 1/120
	I0729 14:30:27.252317 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 2/120
	I0729 14:30:28.253646 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 3/120
	I0729 14:30:29.254997 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 4/120
	I0729 14:30:30.256892 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 5/120
	I0729 14:30:31.258275 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 6/120
	I0729 14:30:32.260073 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 7/120
	I0729 14:30:33.261831 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 8/120
	I0729 14:30:34.263293 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 9/120
	I0729 14:30:35.265663 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 10/120
	I0729 14:30:36.266921 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 11/120
	I0729 14:30:37.268541 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 12/120
	I0729 14:30:38.270010 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 13/120
	I0729 14:30:39.271901 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 14/120
	I0729 14:30:40.273962 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 15/120
	I0729 14:30:41.275306 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 16/120
	I0729 14:30:42.276647 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 17/120
	I0729 14:30:43.278182 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 18/120
	I0729 14:30:44.279630 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 19/120
	I0729 14:30:45.281633 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 20/120
	I0729 14:30:46.283537 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 21/120
	I0729 14:30:47.284914 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 22/120
	I0729 14:30:48.286828 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 23/120
	I0729 14:30:49.288960 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 24/120
	I0729 14:30:50.290898 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 25/120
	I0729 14:30:51.292288 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 26/120
	I0729 14:30:52.293619 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 27/120
	I0729 14:30:53.294881 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 28/120
	I0729 14:30:54.296349 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 29/120
	I0729 14:30:55.298512 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 30/120
	I0729 14:30:56.299949 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 31/120
	I0729 14:30:57.302221 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 32/120
	I0729 14:30:58.304228 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 33/120
	I0729 14:30:59.305693 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 34/120
	I0729 14:31:00.307927 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 35/120
	I0729 14:31:01.309348 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 36/120
	I0729 14:31:02.310860 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 37/120
	I0729 14:31:03.312229 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 38/120
	I0729 14:31:04.314481 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 39/120
	I0729 14:31:05.316798 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 40/120
	I0729 14:31:06.318157 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 41/120
	I0729 14:31:07.319449 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 42/120
	I0729 14:31:08.321246 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 43/120
	I0729 14:31:09.323022 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 44/120
	I0729 14:31:10.324513 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 45/120
	I0729 14:31:11.326026 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 46/120
	I0729 14:31:12.327573 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 47/120
	I0729 14:31:13.328966 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 48/120
	I0729 14:31:14.331189 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 49/120
	I0729 14:31:15.333169 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 50/120
	I0729 14:31:16.334916 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 51/120
	I0729 14:31:17.336512 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 52/120
	I0729 14:31:18.337940 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 53/120
	I0729 14:31:19.339337 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 54/120
	I0729 14:31:20.341428 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 55/120
	I0729 14:31:21.343534 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 56/120
	I0729 14:31:22.344806 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 57/120
	I0729 14:31:23.346013 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 58/120
	I0729 14:31:24.347462 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 59/120
	I0729 14:31:25.349816 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 60/120
	I0729 14:31:26.352267 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 61/120
	I0729 14:31:27.353704 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 62/120
	I0729 14:31:28.355197 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 63/120
	I0729 14:31:29.356568 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 64/120
	I0729 14:31:30.358550 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 65/120
	I0729 14:31:31.359809 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 66/120
	I0729 14:31:32.361250 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 67/120
	I0729 14:31:33.362740 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 68/120
	I0729 14:31:34.364573 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 69/120
	I0729 14:31:35.366729 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 70/120
	I0729 14:31:36.368055 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 71/120
	I0729 14:31:37.369468 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 72/120
	I0729 14:31:38.370783 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 73/120
	I0729 14:31:39.372010 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 74/120
	I0729 14:31:40.373818 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 75/120
	I0729 14:31:41.375260 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 76/120
	I0729 14:31:42.376690 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 77/120
	I0729 14:31:43.377984 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 78/120
	I0729 14:31:44.379450 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 79/120
	I0729 14:31:45.381624 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 80/120
	I0729 14:31:46.382792 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 81/120
	I0729 14:31:47.383943 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 82/120
	I0729 14:31:48.385259 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 83/120
	I0729 14:31:49.386509 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 84/120
	I0729 14:31:50.388459 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 85/120
	I0729 14:31:51.389685 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 86/120
	I0729 14:31:52.390980 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 87/120
	I0729 14:31:53.392288 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 88/120
	I0729 14:31:54.393625 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 89/120
	I0729 14:31:55.395653 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 90/120
	I0729 14:31:56.397100 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 91/120
	I0729 14:31:57.398311 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 92/120
	I0729 14:31:58.399706 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 93/120
	I0729 14:31:59.401151 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 94/120
	I0729 14:32:00.403004 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 95/120
	I0729 14:32:01.404451 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 96/120
	I0729 14:32:02.405802 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 97/120
	I0729 14:32:03.407296 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 98/120
	I0729 14:32:04.408515 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 99/120
	I0729 14:32:05.410512 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 100/120
	I0729 14:32:06.411869 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 101/120
	I0729 14:32:07.413291 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 102/120
	I0729 14:32:08.414712 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 103/120
	I0729 14:32:09.416104 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 104/120
	I0729 14:32:10.418313 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 105/120
	I0729 14:32:11.419548 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 106/120
	I0729 14:32:12.421017 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 107/120
	I0729 14:32:13.422296 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 108/120
	I0729 14:32:14.423653 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 109/120
	I0729 14:32:15.426043 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 110/120
	I0729 14:32:16.427389 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 111/120
	I0729 14:32:17.428794 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 112/120
	I0729 14:32:18.430099 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 113/120
	I0729 14:32:19.431440 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 114/120
	I0729 14:32:20.433356 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 115/120
	I0729 14:32:21.434832 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 116/120
	I0729 14:32:22.436212 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 117/120
	I0729 14:32:23.437993 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 118/120
	I0729 14:32:24.439251 1037807 main.go:141] libmachine: (no-preload-603534) Waiting for machine to stop 119/120
	I0729 14:32:25.439833 1037807 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 14:32:25.439907 1037807 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 14:32:25.441701 1037807 out.go:177] 
	W0729 14:32:25.442885 1037807 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 14:32:25.442906 1037807 out.go:239] * 
	* 
	W0729 14:32:25.694438 1037807 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:32:25.696120 1037807 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-603534 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534
E0729 14:32:27.784212  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:32.082302  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:32:32.904946  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:43.145142  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534: exit status 3 (18.531334411s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:32:44.228864 1038513 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host
	E0729 14:32:44.228887 1038513 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-603534" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.30s)
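Note on triage: this is the GUEST_STOP_TIMEOUT pattern (exit status 82). The VM never left the "Running" state within the 120 stop retries, and the follow-up status probe could no longer reach 192.168.61.116:22 ("no route to host"). The boxes in the output above ask for two artifacts; a sketch of collecting them for this profile (profile name and log path taken from the output, not verified here) would be:

	out/minikube-linux-amd64 -p no-preload-603534 logs --file=logs.txt
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log

The embed-certs Stop failure below times out in the same way, so the same collection applies there with its profile name.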

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-668123 --alsologtostderr -v=3
E0729 14:31:20.400350  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:30.641509  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-668123 --alsologtostderr -v=3: exit status 82 (2m0.469362794s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-668123"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 14:31:19.908822 1038159 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:31:19.909068 1038159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:31:19.909078 1038159 out.go:304] Setting ErrFile to fd 2...
	I0729 14:31:19.909082 1038159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:31:19.909247 1038159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:31:19.909457 1038159 out.go:298] Setting JSON to false
	I0729 14:31:19.909529 1038159 mustload.go:65] Loading cluster: embed-certs-668123
	I0729 14:31:19.909845 1038159 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:31:19.909923 1038159 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/config.json ...
	I0729 14:31:19.910083 1038159 mustload.go:65] Loading cluster: embed-certs-668123
	I0729 14:31:19.910200 1038159 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:31:19.910244 1038159 stop.go:39] StopHost: embed-certs-668123
	I0729 14:31:19.910622 1038159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:31:19.910668 1038159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:31:19.925356 1038159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0729 14:31:19.925885 1038159 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:31:19.926571 1038159 main.go:141] libmachine: Using API Version  1
	I0729 14:31:19.926597 1038159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:31:19.926955 1038159 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:31:19.929163 1038159 out.go:177] * Stopping node "embed-certs-668123"  ...
	I0729 14:31:19.930361 1038159 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 14:31:19.930387 1038159 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:31:19.930652 1038159 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 14:31:19.930696 1038159 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:31:19.933569 1038159 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:31:19.934041 1038159 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:29:46 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:31:19.934091 1038159 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:31:19.934265 1038159 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:31:19.934440 1038159 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:31:19.934610 1038159 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:31:19.934752 1038159 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:31:20.025934 1038159 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 14:31:20.085997 1038159 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 14:31:20.128142 1038159 main.go:141] libmachine: Stopping "embed-certs-668123"...
	I0729 14:31:20.128184 1038159 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:31:20.129932 1038159 main.go:141] libmachine: (embed-certs-668123) Calling .Stop
	I0729 14:31:20.133827 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 0/120
	I0729 14:31:21.135245 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 1/120
	I0729 14:31:22.136872 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 2/120
	I0729 14:31:23.138370 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 3/120
	I0729 14:31:24.139934 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 4/120
	I0729 14:31:25.142111 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 5/120
	I0729 14:31:26.143629 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 6/120
	I0729 14:31:27.145188 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 7/120
	I0729 14:31:28.146784 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 8/120
	I0729 14:31:29.148237 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 9/120
	I0729 14:31:30.149679 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 10/120
	I0729 14:31:31.151321 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 11/120
	I0729 14:31:32.152811 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 12/120
	I0729 14:31:33.154205 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 13/120
	I0729 14:31:34.155625 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 14/120
	I0729 14:31:35.157354 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 15/120
	I0729 14:31:36.158800 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 16/120
	I0729 14:31:37.160347 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 17/120
	I0729 14:31:38.161745 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 18/120
	I0729 14:31:39.162985 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 19/120
	I0729 14:31:40.164401 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 20/120
	I0729 14:31:41.165996 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 21/120
	I0729 14:31:42.167314 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 22/120
	I0729 14:31:43.168731 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 23/120
	I0729 14:31:44.170034 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 24/120
	I0729 14:31:45.171646 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 25/120
	I0729 14:31:46.172981 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 26/120
	I0729 14:31:47.174318 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 27/120
	I0729 14:31:48.175813 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 28/120
	I0729 14:31:49.177741 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 29/120
	I0729 14:31:50.179969 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 30/120
	I0729 14:31:51.181256 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 31/120
	I0729 14:31:52.182621 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 32/120
	I0729 14:31:53.183981 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 33/120
	I0729 14:31:54.185480 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 34/120
	I0729 14:31:55.187260 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 35/120
	I0729 14:31:56.188823 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 36/120
	I0729 14:31:57.190065 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 37/120
	I0729 14:31:58.191463 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 38/120
	I0729 14:31:59.192800 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 39/120
	I0729 14:32:00.195362 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 40/120
	I0729 14:32:01.196854 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 41/120
	I0729 14:32:02.198108 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 42/120
	I0729 14:32:03.199576 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 43/120
	I0729 14:32:04.200990 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 44/120
	I0729 14:32:05.203178 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 45/120
	I0729 14:32:06.204816 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 46/120
	I0729 14:32:07.206163 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 47/120
	I0729 14:32:08.207722 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 48/120
	I0729 14:32:09.209139 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 49/120
	I0729 14:32:10.211269 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 50/120
	I0729 14:32:11.212993 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 51/120
	I0729 14:32:12.214445 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 52/120
	I0729 14:32:13.215956 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 53/120
	I0729 14:32:14.217500 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 54/120
	I0729 14:32:15.219528 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 55/120
	I0729 14:32:16.220926 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 56/120
	I0729 14:32:17.222479 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 57/120
	I0729 14:32:18.223982 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 58/120
	I0729 14:32:19.225401 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 59/120
	I0729 14:32:20.227579 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 60/120
	I0729 14:32:21.229333 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 61/120
	I0729 14:32:22.230793 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 62/120
	I0729 14:32:23.232125 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 63/120
	I0729 14:32:24.233688 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 64/120
	I0729 14:32:25.235824 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 65/120
	I0729 14:32:26.237234 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 66/120
	I0729 14:32:27.238666 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 67/120
	I0729 14:32:28.240042 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 68/120
	I0729 14:32:29.241481 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 69/120
	I0729 14:32:30.243865 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 70/120
	I0729 14:32:31.245294 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 71/120
	I0729 14:32:32.246648 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 72/120
	I0729 14:32:33.248182 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 73/120
	I0729 14:32:34.249710 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 74/120
	I0729 14:32:35.252089 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 75/120
	I0729 14:32:36.253372 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 76/120
	I0729 14:32:37.254695 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 77/120
	I0729 14:32:38.256115 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 78/120
	I0729 14:32:39.257384 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 79/120
	I0729 14:32:40.259457 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 80/120
	I0729 14:32:41.260940 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 81/120
	I0729 14:32:42.262325 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 82/120
	I0729 14:32:43.263676 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 83/120
	I0729 14:32:44.265015 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 84/120
	I0729 14:32:45.266999 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 85/120
	I0729 14:32:46.268569 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 86/120
	I0729 14:32:47.269843 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 87/120
	I0729 14:32:48.271238 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 88/120
	I0729 14:32:49.272654 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 89/120
	I0729 14:32:50.274901 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 90/120
	I0729 14:32:51.276603 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 91/120
	I0729 14:32:52.277940 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 92/120
	I0729 14:32:53.279248 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 93/120
	I0729 14:32:54.280650 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 94/120
	I0729 14:32:55.282495 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 95/120
	I0729 14:32:56.283749 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 96/120
	I0729 14:32:57.285112 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 97/120
	I0729 14:32:58.286793 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 98/120
	I0729 14:32:59.288162 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 99/120
	I0729 14:33:00.290243 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 100/120
	I0729 14:33:01.291558 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 101/120
	I0729 14:33:02.292941 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 102/120
	I0729 14:33:03.294932 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 103/120
	I0729 14:33:04.296363 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 104/120
	I0729 14:33:05.298291 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 105/120
	I0729 14:33:06.299637 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 106/120
	I0729 14:33:07.301033 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 107/120
	I0729 14:33:08.302488 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 108/120
	I0729 14:33:09.303867 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 109/120
	I0729 14:33:10.305987 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 110/120
	I0729 14:33:11.307306 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 111/120
	I0729 14:33:12.308630 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 112/120
	I0729 14:33:13.310093 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 113/120
	I0729 14:33:14.311489 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 114/120
	I0729 14:33:15.313566 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 115/120
	I0729 14:33:16.314934 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 116/120
	I0729 14:33:17.316316 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 117/120
	I0729 14:33:18.317797 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 118/120
	I0729 14:33:19.319189 1038159 main.go:141] libmachine: (embed-certs-668123) Waiting for machine to stop 119/120
	I0729 14:33:20.320106 1038159 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 14:33:20.320163 1038159 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 14:33:20.322099 1038159 out.go:177] 
	W0729 14:33:20.323397 1038159 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 14:33:20.323414 1038159 out.go:239] * 
	* 
	W0729 14:33:20.327951 1038159 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:33:20.329227 1038159 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-668123 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123
E0729 14:33:29.363221  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123: exit status 3 (18.425789268s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:33:38.756803 1038995 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.53:22: connect: no route to host
	E0729 14:33:38.756826 1038995 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.53:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-668123" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-751306 --alsologtostderr -v=3
E0729 14:31:51.121969  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:53.518701  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:32:06.665368  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 14:32:22.664813  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:22.670069  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:22.680313  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:22.700592  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:22.740917  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:22.821297  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:22.981873  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:23.302406  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:23.942999  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:32:25.223570  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-751306 --alsologtostderr -v=3: exit status 82 (2m0.500077388s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-751306"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 14:31:40.952172 1038339 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:31:40.952315 1038339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:31:40.952325 1038339 out.go:304] Setting ErrFile to fd 2...
	I0729 14:31:40.952329 1038339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:31:40.952561 1038339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:31:40.952832 1038339 out.go:298] Setting JSON to false
	I0729 14:31:40.952921 1038339 mustload.go:65] Loading cluster: default-k8s-diff-port-751306
	I0729 14:31:40.953312 1038339 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:31:40.953396 1038339 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/config.json ...
	I0729 14:31:40.953588 1038339 mustload.go:65] Loading cluster: default-k8s-diff-port-751306
	I0729 14:31:40.953713 1038339 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:31:40.953760 1038339 stop.go:39] StopHost: default-k8s-diff-port-751306
	I0729 14:31:40.954142 1038339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:31:40.954189 1038339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:31:40.970024 1038339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46711
	I0729 14:31:40.970504 1038339 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:31:40.971065 1038339 main.go:141] libmachine: Using API Version  1
	I0729 14:31:40.971085 1038339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:31:40.971456 1038339 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:31:40.973759 1038339 out.go:177] * Stopping node "default-k8s-diff-port-751306"  ...
	I0729 14:31:40.975138 1038339 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 14:31:40.975193 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:31:40.975454 1038339 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 14:31:40.975487 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:31:40.978454 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:31:40.978915 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:30:10 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:31:40.978946 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:31:40.979103 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:31:40.979286 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:31:40.979468 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:31:40.979664 1038339 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:31:41.083632 1038339 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 14:31:41.147243 1038339 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 14:31:41.211389 1038339 main.go:141] libmachine: Stopping "default-k8s-diff-port-751306"...
	I0729 14:31:41.211426 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:31:41.213002 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Stop
	I0729 14:31:41.216389 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 0/120
	I0729 14:31:42.217734 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 1/120
	I0729 14:31:43.218943 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 2/120
	I0729 14:31:44.220220 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 3/120
	I0729 14:31:45.221500 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 4/120
	I0729 14:31:46.223405 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 5/120
	I0729 14:31:47.224825 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 6/120
	I0729 14:31:48.226261 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 7/120
	I0729 14:31:49.227679 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 8/120
	I0729 14:31:50.229085 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 9/120
	I0729 14:31:51.231352 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 10/120
	I0729 14:31:52.232759 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 11/120
	I0729 14:31:53.234166 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 12/120
	I0729 14:31:54.235458 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 13/120
	I0729 14:31:55.236944 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 14/120
	I0729 14:31:56.238827 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 15/120
	I0729 14:31:57.239962 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 16/120
	I0729 14:31:58.241499 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 17/120
	I0729 14:31:59.242772 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 18/120
	I0729 14:32:00.244191 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 19/120
	I0729 14:32:01.246226 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 20/120
	I0729 14:32:02.247495 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 21/120
	I0729 14:32:03.248927 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 22/120
	I0729 14:32:04.250174 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 23/120
	I0729 14:32:05.251539 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 24/120
	I0729 14:32:06.253558 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 25/120
	I0729 14:32:07.254746 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 26/120
	I0729 14:32:08.256111 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 27/120
	I0729 14:32:09.257352 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 28/120
	I0729 14:32:10.258837 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 29/120
	I0729 14:32:11.261023 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 30/120
	I0729 14:32:12.262238 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 31/120
	I0729 14:32:13.263677 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 32/120
	I0729 14:32:14.265068 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 33/120
	I0729 14:32:15.266380 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 34/120
	I0729 14:32:16.268129 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 35/120
	I0729 14:32:17.269499 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 36/120
	I0729 14:32:18.270761 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 37/120
	I0729 14:32:19.271931 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 38/120
	I0729 14:32:20.273513 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 39/120
	I0729 14:32:21.275488 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 40/120
	I0729 14:32:22.276696 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 41/120
	I0729 14:32:23.278073 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 42/120
	I0729 14:32:24.279443 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 43/120
	I0729 14:32:25.281240 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 44/120
	I0729 14:32:26.283309 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 45/120
	I0729 14:32:27.284448 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 46/120
	I0729 14:32:28.285927 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 47/120
	I0729 14:32:29.287098 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 48/120
	I0729 14:32:30.288649 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 49/120
	I0729 14:32:31.290649 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 50/120
	I0729 14:32:32.291793 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 51/120
	I0729 14:32:33.293158 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 52/120
	I0729 14:32:34.294261 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 53/120
	I0729 14:32:35.295630 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 54/120
	I0729 14:32:36.297338 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 55/120
	I0729 14:32:37.298836 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 56/120
	I0729 14:32:38.299997 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 57/120
	I0729 14:32:39.301355 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 58/120
	I0729 14:32:40.302664 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 59/120
	I0729 14:32:41.304959 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 60/120
	I0729 14:32:42.306896 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 61/120
	I0729 14:32:43.308121 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 62/120
	I0729 14:32:44.310131 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 63/120
	I0729 14:32:45.311393 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 64/120
	I0729 14:32:46.313401 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 65/120
	I0729 14:32:47.314712 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 66/120
	I0729 14:32:48.316023 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 67/120
	I0729 14:32:49.317364 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 68/120
	I0729 14:32:50.318722 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 69/120
	I0729 14:32:51.320861 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 70/120
	I0729 14:32:52.321971 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 71/120
	I0729 14:32:53.323038 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 72/120
	I0729 14:32:54.324136 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 73/120
	I0729 14:32:55.325418 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 74/120
	I0729 14:32:56.327395 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 75/120
	I0729 14:32:57.329105 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 76/120
	I0729 14:32:58.330364 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 77/120
	I0729 14:32:59.331690 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 78/120
	I0729 14:33:00.333927 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 79/120
	I0729 14:33:01.336151 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 80/120
	I0729 14:33:02.337806 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 81/120
	I0729 14:33:03.339103 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 82/120
	I0729 14:33:04.340215 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 83/120
	I0729 14:33:05.341589 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 84/120
	I0729 14:33:06.343443 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 85/120
	I0729 14:33:07.344936 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 86/120
	I0729 14:33:08.346367 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 87/120
	I0729 14:33:09.347589 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 88/120
	I0729 14:33:10.348872 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 89/120
	I0729 14:33:11.351217 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 90/120
	I0729 14:33:12.352593 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 91/120
	I0729 14:33:13.353832 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 92/120
	I0729 14:33:14.355168 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 93/120
	I0729 14:33:15.356446 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 94/120
	I0729 14:33:16.358356 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 95/120
	I0729 14:33:17.359698 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 96/120
	I0729 14:33:18.360978 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 97/120
	I0729 14:33:19.362764 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 98/120
	I0729 14:33:20.363918 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 99/120
	I0729 14:33:21.366074 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 100/120
	I0729 14:33:22.367483 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 101/120
	I0729 14:33:23.368824 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 102/120
	I0729 14:33:24.370808 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 103/120
	I0729 14:33:25.372117 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 104/120
	I0729 14:33:26.374543 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 105/120
	I0729 14:33:27.376020 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 106/120
	I0729 14:33:28.377758 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 107/120
	I0729 14:33:29.379276 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 108/120
	I0729 14:33:30.380687 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 109/120
	I0729 14:33:31.383068 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 110/120
	I0729 14:33:32.384678 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 111/120
	I0729 14:33:33.386065 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 112/120
	I0729 14:33:34.387431 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 113/120
	I0729 14:33:35.389067 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 114/120
	I0729 14:33:36.391180 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 115/120
	I0729 14:33:37.392639 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 116/120
	I0729 14:33:38.394017 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 117/120
	I0729 14:33:39.395361 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 118/120
	I0729 14:33:40.396869 1038339 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for machine to stop 119/120
	I0729 14:33:41.397759 1038339 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 14:33:41.397843 1038339 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 14:33:41.399690 1038339 out.go:177] 
	W0729 14:33:41.401062 1038339 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 14:33:41.401087 1038339 out.go:239] * 
	* 
	W0729 14:33:41.405813 1038339 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:33:41.407286 1038339 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-751306 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306: exit status 3 (18.595425955s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:34:00.004795 1039153 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.233:22: connect: no route to host
	E0729 14:34:00.004849 1039153 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.233:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-751306" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534: exit status 3 (3.167731104s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:32:47.396821 1038625 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host
	E0729 14:32:47.396841 1038625 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-603534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-603534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154515188s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-603534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534: exit status 3 (3.061318277s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:32:56.612882 1038712 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host
	E0729 14:32:56.612926 1038712 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-603534" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-360866 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-360866 create -f testdata/busybox.yaml: exit status 1 (44.135096ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-360866" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-360866 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 6 (209.520828ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:33:03.910860 1038856 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-360866" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 6 (211.795616ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:33:04.122879 1038886 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-360866" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-360866 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 14:33:04.897102  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:33:08.882086  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:08.887355  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:08.897544  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:08.917823  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:08.958091  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:09.038419  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:09.198869  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:09.520044  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:10.017871  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:33:10.160482  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:11.441610  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:14.001850  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:15.439894  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:33:19.122146  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:33:20.258320  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-360866 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.949107499s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-360866 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-360866 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-360866 describe deploy/metrics-server -n kube-system: exit status 1 (46.414599ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-360866" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-360866 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 6 (220.691468ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:34:51.338276 1039637 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-360866" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123
E0729 14:33:40.738802  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123: exit status 3 (3.167471959s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:33:41.924778 1039107 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.53:22: connect: no route to host
	E0729 14:33:41.924801 1039107 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.53:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-668123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 14:33:44.585510  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-668123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154371112s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.53:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-668123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123
E0729 14:33:49.843810  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123: exit status 3 (3.061256182s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 14:33:51.140812 1039217 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.53:22: connect: no route to host
	E0729 14:33:51.140831 1039217 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.53:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-668123" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
E0729 14:34:00.040556  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:00.045880  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:00.056231  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:00.076522  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:00.116841  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:00.197282  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:00.357855  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:00.678973  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:00.771315  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:34:01.320068  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:02.600891  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306: exit status 3 (3.167595939s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 14:34:03.172831 1039314 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.233:22: connect: no route to host
	E0729 14:34:03.172856 1039314 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.233:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-751306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 14:34:05.161661  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:05.891656  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-751306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155479806s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.233:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-751306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
E0729 14:34:10.281866  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306: exit status 3 (3.060367982s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 14:34:12.388874 1039394 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.233:22: connect: no route to host
	E0729 14:34:12.388898 1039394 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.233:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-751306" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (707.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-360866 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0729 14:35:06.505791  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:35:17.574623  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:35:21.964207  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:35:31.595669  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:35:43.620090  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:35:52.724953  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:35:59.280678  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:36:10.160019  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:36:37.843829  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:36:39.495222  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:36:43.885296  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:37:06.665749  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 14:37:22.664723  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:37:50.345992  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
E0729 14:37:59.775594  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:38:08.882251  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:38:27.460791  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:38:29.719672  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 14:38:36.566140  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:38:55.651995  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:39:00.040006  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:39:23.335779  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:39:27.726269  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:39:30.663225  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 14:40:31.595829  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:41:10.159756  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:42:06.665871  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 14:42:22.664240  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-360866 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m44.071688305s)

-- stdout --
	* [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-360866" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0729 14:34:53.874295 1039759 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:34:53.874567 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874577 1039759 out.go:304] Setting ErrFile to fd 2...
	I0729 14:34:53.874580 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874774 1039759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:34:53.875294 1039759 out.go:298] Setting JSON to false
	I0729 14:34:53.876313 1039759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15446,"bootTime":1722248248,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:34:53.876373 1039759 start.go:139] virtualization: kvm guest
	I0729 14:34:53.878446 1039759 out.go:177] * [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:34:53.879820 1039759 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:34:53.879855 1039759 notify.go:220] Checking for updates...
	I0729 14:34:53.882201 1039759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:34:53.883330 1039759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:34:53.884514 1039759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:34:53.885734 1039759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:34:53.886894 1039759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:34:53.888361 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:34:53.888789 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.888850 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.903960 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 14:34:53.904467 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.905083 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.905112 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.905449 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.905609 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.907360 1039759 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 14:34:53.908710 1039759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:34:53.909026 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.909064 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.923834 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0729 14:34:53.924300 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.924787 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.924809 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.925150 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.925352 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.960368 1039759 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:34:53.961649 1039759 start.go:297] selected driver: kvm2
	I0729 14:34:53.961662 1039759 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.961778 1039759 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:34:53.962398 1039759 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.962459 1039759 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:34:53.977941 1039759 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:34:53.978311 1039759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:34:53.978341 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:34:53.978350 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:34:53.978395 1039759 start.go:340] cluster config:
	{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.978499 1039759 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.980167 1039759 out.go:177] * Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	I0729 14:34:53.981356 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:34:53.981390 1039759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:34:53.981400 1039759 cache.go:56] Caching tarball of preloaded images
	I0729 14:34:53.981477 1039759 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:34:53.981487 1039759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:34:53.981600 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:34:53.981775 1039759 start.go:360] acquireMachinesLock for old-k8s-version-360866: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:38:12.129080 1039759 start.go:364] duration metric: took 3m18.14725367s to acquireMachinesLock for "old-k8s-version-360866"
	I0729 14:38:12.129155 1039759 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:12.129166 1039759 fix.go:54] fixHost starting: 
	I0729 14:38:12.129715 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:12.129752 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:12.146596 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0729 14:38:12.147101 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:12.147554 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:38:12.147581 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:12.147871 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:12.148094 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:12.148293 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetState
	I0729 14:38:12.149880 1039759 fix.go:112] recreateIfNeeded on old-k8s-version-360866: state=Stopped err=<nil>
	I0729 14:38:12.149918 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	W0729 14:38:12.150120 1039759 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:12.152003 1039759 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-360866" ...
	I0729 14:38:12.153214 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .Start
	I0729 14:38:12.153408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring networks are active...
	I0729 14:38:12.154141 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network default is active
	I0729 14:38:12.154590 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network mk-old-k8s-version-360866 is active
	I0729 14:38:12.154970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Getting domain xml...
	I0729 14:38:12.155733 1039759 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:38:12.526504 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting to get IP...
	I0729 14:38:12.527560 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.528068 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.528147 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.528048 1040622 retry.go:31] will retry after 240.079974ms: waiting for machine to come up
	I0729 14:38:12.769388 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.769881 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.769910 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.769829 1040622 retry.go:31] will retry after 271.200632ms: waiting for machine to come up
	I0729 14:38:13.042584 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.043069 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.043101 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.043049 1040622 retry.go:31] will retry after 464.725959ms: waiting for machine to come up
	I0729 14:38:13.509830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.510400 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.510434 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.510350 1040622 retry.go:31] will retry after 416.316047ms: waiting for machine to come up
	I0729 14:38:13.927885 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.928343 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.928373 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.928307 1040622 retry.go:31] will retry after 659.670364ms: waiting for machine to come up
	I0729 14:38:14.589644 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:14.590143 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:14.590172 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:14.590031 1040622 retry.go:31] will retry after 738.020335ms: waiting for machine to come up
	I0729 14:38:15.330093 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:15.330603 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:15.330633 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:15.330553 1040622 retry.go:31] will retry after 1.13067902s: waiting for machine to come up
	I0729 14:38:16.462554 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:16.463002 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:16.463031 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:16.462977 1040622 retry.go:31] will retry after 1.342785853s: waiting for machine to come up
	I0729 14:38:17.806889 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:17.807333 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:17.807365 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:17.807266 1040622 retry.go:31] will retry after 1.804812934s: waiting for machine to come up
	I0729 14:38:19.613474 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:19.613801 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:19.613830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:19.613749 1040622 retry.go:31] will retry after 1.449593132s: waiting for machine to come up
	I0729 14:38:21.064774 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:21.065382 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:21.065405 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:21.065314 1040622 retry.go:31] will retry after 1.807508073s: waiting for machine to come up
	I0729 14:38:22.874485 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:22.874896 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:22.874925 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:22.874844 1040622 retry.go:31] will retry after 3.036719557s: waiting for machine to come up
	I0729 14:38:25.913642 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:25.914139 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:25.914166 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:25.914099 1040622 retry.go:31] will retry after 3.839238383s: waiting for machine to come up
	I0729 14:38:29.755060 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755512 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has current primary IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755524 1039759 main.go:141] libmachine: (old-k8s-version-360866) Found IP for machine: 192.168.39.71
	I0729 14:38:29.755536 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserving static IP address...
	I0729 14:38:29.755975 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.756008 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserved static IP address: 192.168.39.71
	I0729 14:38:29.756035 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | skip adding static IP to network mk-old-k8s-version-360866 - found existing host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"}
	I0729 14:38:29.756048 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting for SSH to be available...
	I0729 14:38:29.756067 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Getting to WaitForSSH function...
	I0729 14:38:29.758527 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.758899 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.758944 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.759003 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH client type: external
	I0729 14:38:29.759024 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa (-rw-------)
	I0729 14:38:29.759058 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:29.759070 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | About to run SSH command:
	I0729 14:38:29.759083 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | exit 0
	I0729 14:38:29.884425 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:29.884833 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:38:29.885450 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:29.887929 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888241 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.888294 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888624 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:38:29.888895 1039759 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:29.888919 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:29.889221 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.891654 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892013 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.892038 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892163 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.892350 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892598 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892764 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.892968 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.893158 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.893169 1039759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:29.993529 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:29.993564 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.993859 1039759 buildroot.go:166] provisioning hostname "old-k8s-version-360866"
	I0729 14:38:29.993893 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.994074 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.996882 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997279 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.997308 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997537 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.997699 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997856 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997976 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.998206 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.998412 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.998429 1039759 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-360866 && echo "old-k8s-version-360866" | sudo tee /etc/hostname
	I0729 14:38:30.115298 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-360866
	
	I0729 14:38:30.115331 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.118349 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.118763 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.118793 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.119029 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.119203 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119356 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119561 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.119772 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.119976 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.120019 1039759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-360866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-360866/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-360866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:30.229987 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:30.230017 1039759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:30.230059 1039759 buildroot.go:174] setting up certificates
	I0729 14:38:30.230070 1039759 provision.go:84] configureAuth start
	I0729 14:38:30.230090 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:30.230436 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:30.233150 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233501 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.233533 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233719 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.236157 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236494 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.236534 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236713 1039759 provision.go:143] copyHostCerts
	I0729 14:38:30.236786 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:30.236797 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:30.236856 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:30.236976 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:30.236986 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:30.237006 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:30.237071 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:30.237078 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:30.237095 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:30.237153 1039759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-360866 san=[127.0.0.1 192.168.39.71 localhost minikube old-k8s-version-360866]
	I0729 14:38:30.680859 1039759 provision.go:177] copyRemoteCerts
	I0729 14:38:30.680933 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:30.680970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.683890 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684229 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.684262 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684430 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.684634 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.684822 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.684973 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:30.770659 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:30.799011 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 14:38:30.825536 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:30.850751 1039759 provision.go:87] duration metric: took 620.664228ms to configureAuth
	I0729 14:38:30.850795 1039759 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:30.850998 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:38:30.851072 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.853735 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854065 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.854102 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854197 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.854408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854559 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854717 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.854961 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.855169 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.855187 1039759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:31.119354 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:31.119386 1039759 machine.go:97] duration metric: took 1.230472142s to provisionDockerMachine
	I0729 14:38:31.119401 1039759 start.go:293] postStartSetup for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:38:31.119415 1039759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:31.119456 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.119885 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:31.119926 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.123196 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123576 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.123607 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123826 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.124053 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.124276 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.124469 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.208607 1039759 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:31.213173 1039759 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:31.213206 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:31.213268 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:31.213352 1039759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:31.213454 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:31.225256 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:31.253156 1039759 start.go:296] duration metric: took 133.735669ms for postStartSetup
	I0729 14:38:31.253208 1039759 fix.go:56] duration metric: took 19.124042428s for fixHost
	I0729 14:38:31.253237 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.256005 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256340 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.256375 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256535 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.256732 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.256927 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.257075 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.257272 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:31.257445 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:31.257455 1039759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:38:31.361488 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263911.340365932
	
	I0729 14:38:31.361512 1039759 fix.go:216] guest clock: 1722263911.340365932
	I0729 14:38:31.361519 1039759 fix.go:229] Guest: 2024-07-29 14:38:31.340365932 +0000 UTC Remote: 2024-07-29 14:38:31.253213714 +0000 UTC m=+217.413183116 (delta=87.152218ms)
	I0729 14:38:31.361572 1039759 fix.go:200] guest clock delta is within tolerance: 87.152218ms
	I0729 14:38:31.361583 1039759 start.go:83] releasing machines lock for "old-k8s-version-360866", held for 19.232453759s
	I0729 14:38:31.361611 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.361921 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:31.364981 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365412 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.365441 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365648 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366227 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366482 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366583 1039759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:31.366644 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.366761 1039759 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:31.366797 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.369658 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.369699 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370051 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370081 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370105 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370125 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370309 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370325 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370567 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370568 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370773 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370809 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370958 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.370957 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.472108 1039759 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:31.478939 1039759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:31.630720 1039759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:31.637768 1039759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:31.637874 1039759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:31.655476 1039759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
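The find command above sidesteps any CNI configs shipped in the guest image: every file under /etc/cni/net.d whose name matches bridge or podman is renamed with a .mk_disabled suffix so the runtime no longer loads it, leaving only minikube's own CNI in play. A minimal shell sketch of that step, with the path and name patterns taken from the log (minikube runs it through its SSH runner rather than an interactive shell):

    # Rename competing bridge/podman CNI configs so CRI-O ignores them
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

In this run that matched only /etc/cni/net.d/87-podman-bridge.conflist.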
	I0729 14:38:31.655504 1039759 start.go:495] detecting cgroup driver to use...
	I0729 14:38:31.655584 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:31.679387 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:31.704260 1039759 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:31.704318 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:31.727875 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:31.743197 1039759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:31.867502 1039759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:32.035088 1039759 docker.go:233] disabling docker service ...
	I0729 14:38:32.035169 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:32.050118 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:32.064828 1039759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:32.202938 1039759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:32.333330 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:32.348845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:32.369848 1039759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:38:32.369923 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.381787 1039759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:32.381893 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.394331 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.405323 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.417259 1039759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:32.428997 1039759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:32.440934 1039759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:32.441003 1039759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:32.454949 1039759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:32.466042 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:32.596308 1039759 ssh_runner.go:195] Run: sudo systemctl restart crio
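Before the restart, the runtime is pointed at the right socket and pause image and switched to the cgroupfs driver expected by this Kubernetes version. Condensed into a shell sketch, with files and values exactly as they appear in the log:

    # Tell crictl where the CRI-O socket lives
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pause image and cgroup driver for Kubernetes v1.20.0
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # Kernel prerequisites, then restart the runtime
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio

The br_netfilter modprobe only runs here because the earlier sysctl probe for net.bridge.bridge-nf-call-iptables failed, which is expected on a freshly booted guest.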
	I0729 14:38:32.762548 1039759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:32.762632 1039759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:32.768336 1039759 start.go:563] Will wait 60s for crictl version
	I0729 14:38:32.768447 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:32.772850 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:32.829827 1039759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:32.829936 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.863269 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.897768 1039759 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:38:32.899209 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:32.902257 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902649 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:32.902680 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902928 1039759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:32.908590 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:32.921952 1039759 kubeadm.go:883] updating cluster {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:32.922094 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:38:32.922141 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:32.969932 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:32.970003 1039759 ssh_runner.go:195] Run: which lz4
	I0729 14:38:32.974564 1039759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:38:32.980128 1039759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:32.980173 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:38:34.707140 1039759 crio.go:462] duration metric: took 1.732619622s to copy over tarball
	I0729 14:38:34.707232 1039759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:37.740076 1039759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.032804006s)
	I0729 14:38:37.740105 1039759 crio.go:469] duration metric: took 3.032930405s to extract the tarball
	I0729 14:38:37.740113 1039759 ssh_runner.go:146] rm: /preloaded.tar.lz4
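With no preloaded images on disk, the cached preload tarball (about 473 MB for v1.20.0 with the cri-o overlay) is copied into the guest and unpacked under /var, after which the image listing is retried. Roughly the following, written as plain scp/ssh for illustration only (minikube drives the same transfer and extraction through its own SSH runner):

    # Ship the preload tarball and extract it, preserving xattrs so image layers keep their capabilities
    scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.71:/preloaded.tar.lz4
    ssh docker@192.168.39.71 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'

Even after extraction the follow-up crictl check still reports kube-apiserver:v1.20.0 as missing, which is why the per-image LoadCachedImages path runs next and then fails on the absent cache files.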
	I0729 14:38:37.786934 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:37.827451 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:37.827484 1039759 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:37.827576 1039759 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:37.827606 1039759 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.827624 1039759 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.827702 1039759 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.827607 1039759 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.827683 1039759 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829621 1039759 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.829709 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.829724 1039759 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.829628 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829808 1039759 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:38:37.829625 1039759 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.113249 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.373433 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.378382 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.380909 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.382431 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.391678 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.392565 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.419739 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:38:38.491174 1039759 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:38:38.491255 1039759 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.491320 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570681 1039759 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:38:38.570784 1039759 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:38:38.570832 1039759 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.570889 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570792 1039759 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.570721 1039759 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:38:38.570966 1039759 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.570977 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570992 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.576687 1039759 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:38:38.576728 1039759 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.576769 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587650 1039759 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:38:38.587699 1039759 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.587701 1039759 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:38:38.587738 1039759 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:38:38.587753 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587791 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587866 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.587883 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.587913 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.587948 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.591209 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.599567 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.610869 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:38:38.742939 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:38:38.742974 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:38:38.743091 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:38:38.743098 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:38:38.745789 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:38:38.745857 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:38:38.753643 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:38:38.753704 1039759 cache_images.go:92] duration metric: took 926.203812ms to LoadCachedImages
	W0729 14:38:38.753790 1039759 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 14:38:38.753804 1039759 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.20.0 crio true true} ...
	I0729 14:38:38.753931 1039759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-360866 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:38.753992 1039759 ssh_runner.go:195] Run: crio config
	I0729 14:38:38.802220 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:38:38.802246 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:38.802258 1039759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:38.802285 1039759 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-360866 NodeName:old-k8s-version-360866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:38:38.802487 1039759 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-360866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:38.802591 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:38:38.816832 1039759 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:38.816934 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:38.827468 1039759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 14:38:38.847125 1039759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:38.865302 1039759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 14:38:38.884267 1039759 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:38.889206 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
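Both of minikube's internal hostnames are pinned in the guest's /etc/hosts: host.minikube.internal resolves to the host side of the libvirt network and control-plane.minikube.internal to this node's own address, so kubeconfigs and certificates can use stable names. After the two rewrite passes above, the relevant entries should look roughly like this (other default entries omitted):

    192.168.39.1     host.minikube.internal
    192.168.39.71    control-plane.minikube.internal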
	I0729 14:38:38.905643 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:39.032065 1039759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:39.051892 1039759 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866 for IP: 192.168.39.71
	I0729 14:38:39.051991 1039759 certs.go:194] generating shared ca certs ...
	I0729 14:38:39.052019 1039759 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.052203 1039759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:39.052258 1039759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:39.052270 1039759 certs.go:256] generating profile certs ...
	I0729 14:38:39.091359 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key
	I0729 14:38:39.091485 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0
	I0729 14:38:39.091554 1039759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key
	I0729 14:38:39.091718 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:39.091763 1039759 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:39.091776 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:39.091804 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:39.091835 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:39.091867 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:39.091924 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:39.092850 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:39.125528 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:39.153093 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:39.181324 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:39.235516 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 14:38:39.262599 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:38:39.293085 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:39.326318 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:38:39.361548 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:39.386876 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:39.412529 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:39.438418 1039759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:39.459519 1039759 ssh_runner.go:195] Run: openssl version
	I0729 14:38:39.466109 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:39.477941 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482748 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482820 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.489099 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:39.500207 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:39.511513 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516125 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516183 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.522297 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:39.533536 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:39.544996 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549681 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549733 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.556332 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
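The three ln/openssl sequences above all follow the same pattern: the PEM sits under /usr/share/ca-certificates and is linked into /etc/ssl/certs both under its own name and under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run) so the system trust store can resolve it. For a single certificate the pattern is, in sketch form:

    # Install a CA bundle under its subject-hash name (the hash value comes from openssl itself)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"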
	I0729 14:38:39.571393 1039759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:39.578420 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:39.586316 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:39.593450 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:39.600604 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:39.607483 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:39.614692 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:39.621776 1039759 kubeadm.go:392] StartCluster: {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:39.621893 1039759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:39.621955 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.673544 1039759 cri.go:89] found id: ""
	I0729 14:38:39.673634 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:39.687887 1039759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:39.687912 1039759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:39.687963 1039759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:39.701616 1039759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:39.702914 1039759 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:39.703576 1039759 kubeconfig.go:62] /home/jenkins/minikube-integration/19338-974764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-360866" cluster setting kubeconfig missing "old-k8s-version-360866" context setting]
	I0729 14:38:39.704951 1039759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.715056 1039759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:39.728384 1039759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.71
	I0729 14:38:39.728448 1039759 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:39.728466 1039759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:39.728534 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.778476 1039759 cri.go:89] found id: ""
	I0729 14:38:39.778561 1039759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:39.800712 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:39.813243 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:39.813265 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:39.813323 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:38:39.824822 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:39.824897 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:39.834966 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:38:39.847660 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:39.847887 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:39.861117 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.873868 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:39.873936 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.884195 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:38:39.895155 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:39.895234 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:39.909138 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:39.920721 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:40.055932 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.173909 1039759 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.117933178s)
	I0729 14:38:41.173947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.419684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.550852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
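Since the earlier kubeconfig check found no admin.conf/kubelet.conf on the node, the control plane is regenerated phase by phase from /var/tmp/minikube/kubeadm.yaml rather than via a full kubeadm init. Condensed from the commands above (each runs as root with minikube's bundled v1.20.0 binaries first on PATH):

    export PATH=/var/lib/minikube/binaries/v1.20.0:$PATH
    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml

The pgrep loop that follows polls every 500ms for a kube-apiserver process started from the freshly written manifests, until it appears or the wait times out.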
	I0729 14:38:41.655941 1039759 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:41.656040 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.156080 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.656948 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.656087 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:44.156583 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:44.657199 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.156268 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.656786 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.156393 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.656151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.156507 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.656922 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.156840 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.656756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:49.156539 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:49.656397 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.656968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.156321 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.656183 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.157099 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.656725 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.157009 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.656787 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:54.156921 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:54.656957 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.156201 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.656783 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.156180 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.656984 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.156610 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.656127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.156785 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.656192 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:59.156740 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:59.656223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.156726 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.656593 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.156115 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.656364 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.157069 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.656491 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.156938 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.656898 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:04.157177 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:04.656505 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.156530 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.656389 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.156606 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.657121 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.157048 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.656497 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.156327 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.656868 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:09.156858 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:09.656910 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.156126 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.657149 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.156223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.657184 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.156454 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.656896 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.656971 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:14.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:14.656806 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.156564 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.656881 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.156239 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.656440 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.157130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.656240 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.156161 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.656808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:19.156721 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:19.656766 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.156352 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.656788 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.156179 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.656213 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.156475 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.656275 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.156592 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.656979 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:24.156798 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:24.656473 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.156551 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.656356 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.156887 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.656332 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.156494 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.656839 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.156763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.656512 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:29.156096 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:29.656289 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.156756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.656888 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.156563 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.156271 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.656562 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:34.157046 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:34.656398 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.156198 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.656763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.156542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.656994 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.156808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.657093 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.156119 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.657017 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:39.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:39.656176 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.156455 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.656609 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.156891 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.656327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:41.656423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:41.701839 1039759 cri.go:89] found id: ""
	I0729 14:39:41.701863 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.701872 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:41.701878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:41.701942 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:41.738281 1039759 cri.go:89] found id: ""
	I0729 14:39:41.738308 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.738315 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:41.738321 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:41.738377 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:41.771954 1039759 cri.go:89] found id: ""
	I0729 14:39:41.771981 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.771990 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:41.771996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:41.772060 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:41.806157 1039759 cri.go:89] found id: ""
	I0729 14:39:41.806182 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.806190 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:41.806196 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:41.806249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:41.841284 1039759 cri.go:89] found id: ""
	I0729 14:39:41.841312 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.841319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:41.841325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:41.841394 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:41.875864 1039759 cri.go:89] found id: ""
	I0729 14:39:41.875893 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.875902 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:41.875908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:41.875962 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:41.909797 1039759 cri.go:89] found id: ""
	I0729 14:39:41.909824 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.909833 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:41.909840 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:41.909892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:41.943886 1039759 cri.go:89] found id: ""
	I0729 14:39:41.943912 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.943920 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:41.943929 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:41.943944 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:41.983224 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:41.983254 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:42.035264 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:42.035303 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:42.049343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:42.049369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:42.171904 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:42.171924 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:42.171947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:44.738629 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:44.753497 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:44.753582 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:44.793256 1039759 cri.go:89] found id: ""
	I0729 14:39:44.793283 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.793291 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:44.793298 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:44.793363 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:44.833698 1039759 cri.go:89] found id: ""
	I0729 14:39:44.833726 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.833733 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:44.833739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:44.833792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:44.876328 1039759 cri.go:89] found id: ""
	I0729 14:39:44.876357 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.876366 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:44.876372 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:44.876471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:44.918091 1039759 cri.go:89] found id: ""
	I0729 14:39:44.918121 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.918132 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:44.918140 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:44.918210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:44.965105 1039759 cri.go:89] found id: ""
	I0729 14:39:44.965137 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.965149 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:44.965157 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:44.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:45.014119 1039759 cri.go:89] found id: ""
	I0729 14:39:45.014150 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.014162 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:45.014170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:45.014243 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:45.059826 1039759 cri.go:89] found id: ""
	I0729 14:39:45.059858 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.059870 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:45.059879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:45.059946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:45.099666 1039759 cri.go:89] found id: ""
	I0729 14:39:45.099706 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.099717 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:45.099730 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:45.099748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:45.144219 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:45.144263 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:45.199719 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:45.199754 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:45.214225 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:45.214260 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:45.289090 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:45.289119 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:45.289138 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:47.860797 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:47.874523 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:47.874606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:47.913570 1039759 cri.go:89] found id: ""
	I0729 14:39:47.913599 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.913608 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:47.913615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:47.913674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:47.946699 1039759 cri.go:89] found id: ""
	I0729 14:39:47.946725 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.946734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:47.946740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:47.946792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:47.986492 1039759 cri.go:89] found id: ""
	I0729 14:39:47.986533 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.986546 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:47.986554 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:47.986635 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:48.027232 1039759 cri.go:89] found id: ""
	I0729 14:39:48.027260 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.027268 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:48.027274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:48.027327 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:48.065119 1039759 cri.go:89] found id: ""
	I0729 14:39:48.065145 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.065153 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:48.065159 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:48.065217 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:48.105087 1039759 cri.go:89] found id: ""
	I0729 14:39:48.105119 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.105128 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:48.105134 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:48.105199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:48.144684 1039759 cri.go:89] found id: ""
	I0729 14:39:48.144718 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.144730 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:48.144737 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:48.144816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:48.180350 1039759 cri.go:89] found id: ""
	I0729 14:39:48.180380 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.180389 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:48.180401 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:48.180436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:48.259859 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:48.259905 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:48.301132 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:48.301163 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:48.352753 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:48.352795 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:48.365936 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:48.365969 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:48.434634 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:50.934903 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:50.948702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:50.948787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:50.982889 1039759 cri.go:89] found id: ""
	I0729 14:39:50.982917 1039759 logs.go:276] 0 containers: []
	W0729 14:39:50.982927 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:50.982933 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:50.983010 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:51.020679 1039759 cri.go:89] found id: ""
	I0729 14:39:51.020713 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.020726 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:51.020734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:51.020818 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:51.055114 1039759 cri.go:89] found id: ""
	I0729 14:39:51.055147 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.055158 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:51.055166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:51.055237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:51.089053 1039759 cri.go:89] found id: ""
	I0729 14:39:51.089087 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.089099 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:51.089108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:51.089184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:51.125823 1039759 cri.go:89] found id: ""
	I0729 14:39:51.125851 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.125861 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:51.125868 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:51.125938 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:51.162645 1039759 cri.go:89] found id: ""
	I0729 14:39:51.162683 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.162694 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:51.162702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:51.162767 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:51.196820 1039759 cri.go:89] found id: ""
	I0729 14:39:51.196849 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.196857 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:51.196864 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:51.196937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:51.236442 1039759 cri.go:89] found id: ""
	I0729 14:39:51.236469 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.236479 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:51.236491 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:51.236506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:51.317077 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:51.317101 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:51.317119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:51.398118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:51.398172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:51.437096 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:51.437128 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:51.488949 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:51.488992 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:54.004536 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:54.019400 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:54.019480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:54.054592 1039759 cri.go:89] found id: ""
	I0729 14:39:54.054626 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.054639 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:54.054647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:54.054712 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:54.090184 1039759 cri.go:89] found id: ""
	I0729 14:39:54.090217 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.090227 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:54.090234 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:54.090304 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:54.129977 1039759 cri.go:89] found id: ""
	I0729 14:39:54.130007 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.130016 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:54.130022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:54.130081 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:54.170940 1039759 cri.go:89] found id: ""
	I0729 14:39:54.170970 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.170980 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:54.170988 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:54.171053 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:54.206197 1039759 cri.go:89] found id: ""
	I0729 14:39:54.206224 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.206244 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:54.206251 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:54.206340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:54.246929 1039759 cri.go:89] found id: ""
	I0729 14:39:54.246963 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.246973 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:54.246980 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:54.247049 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:54.286202 1039759 cri.go:89] found id: ""
	I0729 14:39:54.286231 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.286240 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:54.286245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:54.286315 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:54.321784 1039759 cri.go:89] found id: ""
	I0729 14:39:54.321815 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.321824 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:54.321837 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:54.321860 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:54.363159 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:54.363187 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:54.416151 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:54.416194 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:54.429824 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:54.429852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:54.506348 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:54.506373 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:54.506390 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.094810 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:57.108163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:57.108238 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:57.143556 1039759 cri.go:89] found id: ""
	I0729 14:39:57.143588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.143601 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:57.143608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:57.143678 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:57.177664 1039759 cri.go:89] found id: ""
	I0729 14:39:57.177695 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.177706 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:57.177714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:57.177801 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:57.212046 1039759 cri.go:89] found id: ""
	I0729 14:39:57.212106 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.212231 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:57.212249 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:57.212323 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:57.252518 1039759 cri.go:89] found id: ""
	I0729 14:39:57.252549 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.252558 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:57.252564 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:57.252677 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:57.287045 1039759 cri.go:89] found id: ""
	I0729 14:39:57.287069 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.287077 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:57.287084 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:57.287141 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:57.324553 1039759 cri.go:89] found id: ""
	I0729 14:39:57.324588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.324599 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:57.324607 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:57.324684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:57.358761 1039759 cri.go:89] found id: ""
	I0729 14:39:57.358801 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.358811 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:57.358819 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:57.358898 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:57.402023 1039759 cri.go:89] found id: ""
	I0729 14:39:57.402051 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.402062 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:57.402074 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:57.402094 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:57.445600 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:57.445632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:57.501876 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:57.501911 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:57.518264 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:57.518299 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:57.593247 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:57.593274 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:57.593292 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:00.181109 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:00.194553 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:00.194641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:00.237761 1039759 cri.go:89] found id: ""
	I0729 14:40:00.237801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.237814 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:00.237829 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:00.237901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:00.273113 1039759 cri.go:89] found id: ""
	I0729 14:40:00.273145 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.273157 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:00.273166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:00.273232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:00.312136 1039759 cri.go:89] found id: ""
	I0729 14:40:00.312169 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.312176 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:00.312182 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:00.312249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:00.349610 1039759 cri.go:89] found id: ""
	I0729 14:40:00.349642 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.349654 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:00.349662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:00.349792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:00.384121 1039759 cri.go:89] found id: ""
	I0729 14:40:00.384148 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.384157 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:00.384163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:00.384233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:00.419684 1039759 cri.go:89] found id: ""
	I0729 14:40:00.419720 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.419731 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:00.419739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:00.419809 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:00.453905 1039759 cri.go:89] found id: ""
	I0729 14:40:00.453937 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.453945 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:00.453951 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:00.454023 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:00.490124 1039759 cri.go:89] found id: ""
	I0729 14:40:00.490149 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.490158 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:00.490168 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:00.490185 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:00.562684 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:00.562713 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:00.562735 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:00.643860 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:00.643899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:00.683247 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:00.683276 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:00.734131 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:00.734166 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.249468 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:03.262712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:03.262788 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:03.300774 1039759 cri.go:89] found id: ""
	I0729 14:40:03.300801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.300816 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:03.300823 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:03.300891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:03.335367 1039759 cri.go:89] found id: ""
	I0729 14:40:03.335398 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.335409 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:03.335419 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:03.335488 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:03.375683 1039759 cri.go:89] found id: ""
	I0729 14:40:03.375717 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.375728 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:03.375734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:03.375814 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:03.409593 1039759 cri.go:89] found id: ""
	I0729 14:40:03.409623 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.409631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:03.409637 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:03.409711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:03.444531 1039759 cri.go:89] found id: ""
	I0729 14:40:03.444566 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.444578 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:03.444585 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:03.444655 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:03.479446 1039759 cri.go:89] found id: ""
	I0729 14:40:03.479476 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.479487 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:03.479495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:03.479563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:03.517277 1039759 cri.go:89] found id: ""
	I0729 14:40:03.517311 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.517321 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:03.517329 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:03.517396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:03.556343 1039759 cri.go:89] found id: ""
	I0729 14:40:03.556373 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.556381 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:03.556391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:03.556422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:03.610156 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:03.610196 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.624776 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:03.624812 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:03.696584 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:03.696609 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:03.696625 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:03.775066 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:03.775109 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:06.319720 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:06.332865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:06.332937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:06.366576 1039759 cri.go:89] found id: ""
	I0729 14:40:06.366608 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.366631 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:06.366639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:06.366730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:06.402710 1039759 cri.go:89] found id: ""
	I0729 14:40:06.402735 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.402743 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:06.402748 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:06.402804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:06.439048 1039759 cri.go:89] found id: ""
	I0729 14:40:06.439095 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.439116 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:06.439125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:06.439196 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:06.473407 1039759 cri.go:89] found id: ""
	I0729 14:40:06.473443 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.473456 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:06.473464 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:06.473544 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:06.507278 1039759 cri.go:89] found id: ""
	I0729 14:40:06.507309 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.507319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:06.507327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:06.507396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:06.541573 1039759 cri.go:89] found id: ""
	I0729 14:40:06.541600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.541608 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:06.541617 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:06.541679 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:06.587666 1039759 cri.go:89] found id: ""
	I0729 14:40:06.587697 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.587707 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:06.587714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:06.587773 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:06.622415 1039759 cri.go:89] found id: ""
	I0729 14:40:06.622448 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.622459 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:06.622478 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:06.622497 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:06.659987 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:06.660019 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:06.716303 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:06.716338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:06.731051 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:06.731076 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:06.809014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:06.809045 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:06.809064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:09.387843 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:09.401894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:09.401984 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:09.439385 1039759 cri.go:89] found id: ""
	I0729 14:40:09.439425 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.439438 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:09.439446 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:09.439506 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:09.474307 1039759 cri.go:89] found id: ""
	I0729 14:40:09.474340 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.474352 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:09.474361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:09.474434 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:09.508198 1039759 cri.go:89] found id: ""
	I0729 14:40:09.508233 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.508245 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:09.508253 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:09.508325 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:09.543729 1039759 cri.go:89] found id: ""
	I0729 14:40:09.543762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.543772 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:09.543779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:09.543847 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:09.598723 1039759 cri.go:89] found id: ""
	I0729 14:40:09.598760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.598769 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:09.598775 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:09.598841 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:09.636009 1039759 cri.go:89] found id: ""
	I0729 14:40:09.636038 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.636050 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:09.636058 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:09.636126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:09.675590 1039759 cri.go:89] found id: ""
	I0729 14:40:09.675618 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.675628 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:09.675636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:09.675698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:09.710331 1039759 cri.go:89] found id: ""
	I0729 14:40:09.710374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.710385 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:09.710397 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:09.710416 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:09.790014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:09.790046 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:09.790064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:09.870233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:09.870278 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:09.910421 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:09.910454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:09.962429 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:09.962474 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.476775 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:12.490804 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:12.490875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:12.529435 1039759 cri.go:89] found id: ""
	I0729 14:40:12.529466 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.529478 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:12.529485 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:12.529551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:12.564769 1039759 cri.go:89] found id: ""
	I0729 14:40:12.564806 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.564818 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:12.564826 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:12.564912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:12.600253 1039759 cri.go:89] found id: ""
	I0729 14:40:12.600285 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.600296 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:12.600304 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:12.600367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:12.636112 1039759 cri.go:89] found id: ""
	I0729 14:40:12.636146 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.636155 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:12.636161 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:12.636216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:12.675592 1039759 cri.go:89] found id: ""
	I0729 14:40:12.675621 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.675631 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:12.675639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:12.675711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:12.711438 1039759 cri.go:89] found id: ""
	I0729 14:40:12.711469 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.711480 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:12.711488 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:12.711554 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:12.745497 1039759 cri.go:89] found id: ""
	I0729 14:40:12.745524 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.745533 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:12.745539 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:12.745598 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:12.778934 1039759 cri.go:89] found id: ""
	I0729 14:40:12.778966 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.778977 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:12.778991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:12.779010 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:12.854721 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:12.854759 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:12.854780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:12.932118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:12.932158 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:12.974429 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:12.974461 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:13.030073 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:13.030108 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:15.544245 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:15.559013 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:15.559090 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:15.594018 1039759 cri.go:89] found id: ""
	I0729 14:40:15.594051 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.594064 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:15.594076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:15.594147 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:15.630734 1039759 cri.go:89] found id: ""
	I0729 14:40:15.630762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.630771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:15.630777 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:15.630832 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:15.666159 1039759 cri.go:89] found id: ""
	I0729 14:40:15.666191 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.666202 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:15.666210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:15.666275 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:15.701058 1039759 cri.go:89] found id: ""
	I0729 14:40:15.701088 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.701098 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:15.701115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:15.701170 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:15.737006 1039759 cri.go:89] found id: ""
	I0729 14:40:15.737040 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.737052 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:15.737066 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:15.737139 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:15.775678 1039759 cri.go:89] found id: ""
	I0729 14:40:15.775706 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.775718 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:15.775728 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:15.775795 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:15.812239 1039759 cri.go:89] found id: ""
	I0729 14:40:15.812268 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.812276 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:15.812283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:15.812348 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:15.847653 1039759 cri.go:89] found id: ""
	I0729 14:40:15.847682 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.847693 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:15.847707 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:15.847725 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:15.903094 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:15.903137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:15.917060 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:15.917093 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:15.993458 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:15.993481 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:15.993499 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:16.073369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:16.073409 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:18.616107 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:18.630263 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:18.630340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:18.668228 1039759 cri.go:89] found id: ""
	I0729 14:40:18.668261 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.668271 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:18.668279 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:18.668342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:18.706863 1039759 cri.go:89] found id: ""
	I0729 14:40:18.706891 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.706902 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:18.706909 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:18.706978 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:18.739703 1039759 cri.go:89] found id: ""
	I0729 14:40:18.739728 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.739736 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:18.739742 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:18.739796 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:18.777025 1039759 cri.go:89] found id: ""
	I0729 14:40:18.777066 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.777077 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:18.777085 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:18.777158 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:18.814000 1039759 cri.go:89] found id: ""
	I0729 14:40:18.814026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.814039 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:18.814051 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:18.814116 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:18.851027 1039759 cri.go:89] found id: ""
	I0729 14:40:18.851058 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.851069 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:18.851076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:18.851151 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:18.903888 1039759 cri.go:89] found id: ""
	I0729 14:40:18.903920 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.903932 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:18.903941 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:18.904002 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:18.938756 1039759 cri.go:89] found id: ""
	I0729 14:40:18.938784 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.938791 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:18.938801 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:18.938814 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:18.988482 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:18.988520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:19.002145 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:19.002177 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:19.085372 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:19.085397 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:19.085424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:19.171294 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:19.171387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:21.709578 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:21.722874 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:21.722941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:21.768110 1039759 cri.go:89] found id: ""
	I0729 14:40:21.768139 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.768150 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:21.768156 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:21.768210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:21.808853 1039759 cri.go:89] found id: ""
	I0729 14:40:21.808886 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.808897 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:21.808905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:21.808974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:21.843432 1039759 cri.go:89] found id: ""
	I0729 14:40:21.843472 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.843484 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:21.843493 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:21.843576 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:21.876497 1039759 cri.go:89] found id: ""
	I0729 14:40:21.876535 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.876547 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:21.876555 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:21.876633 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:21.911528 1039759 cri.go:89] found id: ""
	I0729 14:40:21.911556 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.911565 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:21.911571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:21.911626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:21.944514 1039759 cri.go:89] found id: ""
	I0729 14:40:21.944548 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.944560 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:21.944569 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:21.944641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:21.978113 1039759 cri.go:89] found id: ""
	I0729 14:40:21.978151 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.978162 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:21.978170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:21.978233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:22.012390 1039759 cri.go:89] found id: ""
	I0729 14:40:22.012438 1039759 logs.go:276] 0 containers: []
	W0729 14:40:22.012449 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:22.012461 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:22.012484 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:22.027921 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:22.027952 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:22.095087 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:22.095115 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:22.095132 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:22.178462 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:22.178495 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:22.220155 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:22.220188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:24.771932 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:24.784764 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:24.784851 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:24.820445 1039759 cri.go:89] found id: ""
	I0729 14:40:24.820473 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.820485 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:24.820501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:24.820569 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:24.854753 1039759 cri.go:89] found id: ""
	I0729 14:40:24.854786 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.854796 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:24.854802 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:24.854856 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:24.889200 1039759 cri.go:89] found id: ""
	I0729 14:40:24.889230 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.889242 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:24.889250 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:24.889312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:24.932383 1039759 cri.go:89] found id: ""
	I0729 14:40:24.932435 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.932447 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:24.932454 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:24.932515 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:24.971830 1039759 cri.go:89] found id: ""
	I0729 14:40:24.971859 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.971871 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:24.971879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:24.971936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:25.014336 1039759 cri.go:89] found id: ""
	I0729 14:40:25.014374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.014386 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:25.014397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:25.014464 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:25.048131 1039759 cri.go:89] found id: ""
	I0729 14:40:25.048161 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.048171 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:25.048177 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:25.048232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:25.089830 1039759 cri.go:89] found id: ""
	I0729 14:40:25.089866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.089878 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:25.089893 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:25.089907 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:25.172078 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:25.172113 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:25.221629 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:25.221661 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:25.291761 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:25.291806 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:25.314521 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:25.314559 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:25.402738 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:27.903335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:27.918335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:27.918413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:27.951929 1039759 cri.go:89] found id: ""
	I0729 14:40:27.951955 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.951966 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:27.951972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:27.952029 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:27.986229 1039759 cri.go:89] found id: ""
	I0729 14:40:27.986266 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.986279 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:27.986287 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:27.986352 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:28.019467 1039759 cri.go:89] found id: ""
	I0729 14:40:28.019504 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.019517 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:28.019524 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:28.019590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:28.053762 1039759 cri.go:89] found id: ""
	I0729 14:40:28.053790 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.053799 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:28.053806 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:28.053858 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:28.088947 1039759 cri.go:89] found id: ""
	I0729 14:40:28.088975 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.088989 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:28.088996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:28.089070 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:28.130018 1039759 cri.go:89] found id: ""
	I0729 14:40:28.130052 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.130064 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:28.130072 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:28.130143 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:28.163682 1039759 cri.go:89] found id: ""
	I0729 14:40:28.163715 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.163725 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:28.163734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:28.163802 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:28.199698 1039759 cri.go:89] found id: ""
	I0729 14:40:28.199732 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.199744 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:28.199757 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:28.199774 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:28.253735 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:28.253776 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:28.267786 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:28.267825 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:28.337218 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:28.337250 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:28.337265 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:28.419644 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:28.419688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:30.958707 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:30.972073 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:30.972146 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:31.016629 1039759 cri.go:89] found id: ""
	I0729 14:40:31.016662 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.016673 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:31.016681 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:31.016747 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:31.058891 1039759 cri.go:89] found id: ""
	I0729 14:40:31.058921 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.058930 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:31.058936 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:31.059004 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:31.096599 1039759 cri.go:89] found id: ""
	I0729 14:40:31.096633 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.096645 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:31.096654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:31.096741 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:31.143525 1039759 cri.go:89] found id: ""
	I0729 14:40:31.143554 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.143562 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:31.143568 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:31.143628 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:31.180188 1039759 cri.go:89] found id: ""
	I0729 14:40:31.180220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.180230 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:31.180239 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:31.180310 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:31.219995 1039759 cri.go:89] found id: ""
	I0729 14:40:31.220026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.220037 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:31.220045 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:31.220108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:31.254137 1039759 cri.go:89] found id: ""
	I0729 14:40:31.254182 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.254192 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:31.254200 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:31.254272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:31.288065 1039759 cri.go:89] found id: ""
	I0729 14:40:31.288098 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.288109 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:31.288122 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:31.288137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:31.341299 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:31.341338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:31.355357 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:31.355387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:31.427414 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:31.427439 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:31.427453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:31.508372 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:31.508439 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:34.052770 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:34.066300 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:34.066366 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:34.104242 1039759 cri.go:89] found id: ""
	I0729 14:40:34.104278 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.104290 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:34.104299 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:34.104367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:34.143092 1039759 cri.go:89] found id: ""
	I0729 14:40:34.143125 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.143137 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:34.143145 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:34.143216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:34.177966 1039759 cri.go:89] found id: ""
	I0729 14:40:34.177993 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.178002 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:34.178011 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:34.178098 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:34.218325 1039759 cri.go:89] found id: ""
	I0729 14:40:34.218351 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.218361 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:34.218369 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:34.218437 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:34.256632 1039759 cri.go:89] found id: ""
	I0729 14:40:34.256665 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.256675 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:34.256683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:34.256753 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:34.290713 1039759 cri.go:89] found id: ""
	I0729 14:40:34.290739 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.290747 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:34.290753 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:34.290816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:34.331345 1039759 cri.go:89] found id: ""
	I0729 14:40:34.331378 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.331389 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:34.331397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:34.331468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:34.370184 1039759 cri.go:89] found id: ""
	I0729 14:40:34.370214 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.370226 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:34.370239 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:34.370256 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:34.448667 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:34.448709 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:34.492943 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:34.492974 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:34.548784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:34.548827 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:34.565353 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:34.565389 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:34.639411 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.140039 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:37.153732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:37.153806 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:37.189360 1039759 cri.go:89] found id: ""
	I0729 14:40:37.189389 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.189398 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:37.189404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:37.189474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:37.225790 1039759 cri.go:89] found id: ""
	I0729 14:40:37.225820 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.225831 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:37.225839 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:37.225914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:37.261742 1039759 cri.go:89] found id: ""
	I0729 14:40:37.261772 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.261782 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:37.261791 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:37.261862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:37.295791 1039759 cri.go:89] found id: ""
	I0729 14:40:37.295826 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.295835 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:37.295843 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:37.295908 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:37.331290 1039759 cri.go:89] found id: ""
	I0729 14:40:37.331324 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.331334 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:37.331343 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:37.331413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:37.366150 1039759 cri.go:89] found id: ""
	I0729 14:40:37.366183 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.366195 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:37.366203 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:37.366273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:37.400983 1039759 cri.go:89] found id: ""
	I0729 14:40:37.401019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.401030 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:37.401038 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:37.401110 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:37.435333 1039759 cri.go:89] found id: ""
	I0729 14:40:37.435368 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.435379 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:37.435391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:37.435407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:37.488020 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:37.488057 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:37.501543 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:37.501573 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:37.576006 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.576033 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:37.576050 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:37.658600 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:37.658641 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:40.200763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:40.216048 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:40.216121 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:40.253969 1039759 cri.go:89] found id: ""
	I0729 14:40:40.253996 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.254005 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:40.254012 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:40.254078 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:40.289557 1039759 cri.go:89] found id: ""
	I0729 14:40:40.289595 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.289608 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:40.289616 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:40.289698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:40.329756 1039759 cri.go:89] found id: ""
	I0729 14:40:40.329799 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.329823 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:40.329833 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:40.329906 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:40.365281 1039759 cri.go:89] found id: ""
	I0729 14:40:40.365315 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.365327 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:40.365335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:40.365403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:40.401300 1039759 cri.go:89] found id: ""
	I0729 14:40:40.401327 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.401336 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:40.401342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:40.401398 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:40.435679 1039759 cri.go:89] found id: ""
	I0729 14:40:40.435710 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.435719 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:40.435726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:40.435781 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:40.475825 1039759 cri.go:89] found id: ""
	I0729 14:40:40.475851 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.475859 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:40.475866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:40.475926 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:40.512153 1039759 cri.go:89] found id: ""
	I0729 14:40:40.512184 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.512191 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:40.512202 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:40.512215 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:40.563983 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:40.564022 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:40.578823 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:40.578853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:40.650282 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:40.650311 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:40.650328 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:40.734933 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:40.734980 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.280095 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:43.294284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:43.294361 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:43.328862 1039759 cri.go:89] found id: ""
	I0729 14:40:43.328890 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.328899 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:43.328905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:43.328971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:43.366321 1039759 cri.go:89] found id: ""
	I0729 14:40:43.366364 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.366376 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:43.366384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:43.366459 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:43.400189 1039759 cri.go:89] found id: ""
	I0729 14:40:43.400220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.400229 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:43.400235 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:43.400299 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:43.438521 1039759 cri.go:89] found id: ""
	I0729 14:40:43.438562 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.438582 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:43.438594 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:43.438665 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:43.473931 1039759 cri.go:89] found id: ""
	I0729 14:40:43.473958 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.473966 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:43.473972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:43.474035 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:43.511460 1039759 cri.go:89] found id: ""
	I0729 14:40:43.511490 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.511497 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:43.511506 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:43.511563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:43.547255 1039759 cri.go:89] found id: ""
	I0729 14:40:43.547290 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.547301 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:43.547309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:43.547375 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:43.582384 1039759 cri.go:89] found id: ""
	I0729 14:40:43.582418 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.582428 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:43.582441 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:43.582459 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:43.595747 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:43.595780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:43.665389 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:43.665413 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:43.665427 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:43.752669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:43.752712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.797239 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:43.797272 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:46.352841 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:46.368204 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:46.368278 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:46.406661 1039759 cri.go:89] found id: ""
	I0729 14:40:46.406687 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.406695 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:46.406701 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:46.406761 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:46.443728 1039759 cri.go:89] found id: ""
	I0729 14:40:46.443760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.443771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:46.443778 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:46.443845 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:46.477632 1039759 cri.go:89] found id: ""
	I0729 14:40:46.477666 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.477677 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:46.477686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:46.477754 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:46.512510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.512538 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.512549 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:46.512557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:46.512629 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:46.550803 1039759 cri.go:89] found id: ""
	I0729 14:40:46.550834 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.550843 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:46.550848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:46.550914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:46.591610 1039759 cri.go:89] found id: ""
	I0729 14:40:46.591640 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.591652 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:46.591661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:46.591723 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:46.631090 1039759 cri.go:89] found id: ""
	I0729 14:40:46.631122 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.631132 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:46.631139 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:46.631199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:46.670510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.670542 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.670554 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:46.670573 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:46.670590 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:46.725560 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:46.725594 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:46.739348 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:46.739372 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:46.812850 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:46.812874 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:46.812892 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:46.892922 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:46.892964 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:49.438741 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:49.452505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:49.452588 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:49.487294 1039759 cri.go:89] found id: ""
	I0729 14:40:49.487323 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.487331 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:49.487340 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:49.487407 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:49.521783 1039759 cri.go:89] found id: ""
	I0729 14:40:49.521816 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.521828 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:49.521836 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:49.521901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:49.557039 1039759 cri.go:89] found id: ""
	I0729 14:40:49.557075 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.557086 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:49.557094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:49.557162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:49.590431 1039759 cri.go:89] found id: ""
	I0729 14:40:49.590462 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.590474 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:49.590494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:49.590574 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:49.626230 1039759 cri.go:89] found id: ""
	I0729 14:40:49.626260 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.626268 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:49.626274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:49.626339 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:49.662030 1039759 cri.go:89] found id: ""
	I0729 14:40:49.662060 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.662068 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:49.662075 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:49.662130 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:49.699988 1039759 cri.go:89] found id: ""
	I0729 14:40:49.700019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.700035 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:49.700076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:49.700144 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:49.736830 1039759 cri.go:89] found id: ""
	I0729 14:40:49.736864 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.736873 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:49.736882 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:49.736895 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:49.775670 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:49.775703 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:49.830820 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:49.830853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:49.846374 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:49.846407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:49.917475 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:49.917502 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:49.917520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.499291 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:52.513571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:52.513641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:52.547437 1039759 cri.go:89] found id: ""
	I0729 14:40:52.547474 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.547487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:52.547495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:52.547559 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:52.587664 1039759 cri.go:89] found id: ""
	I0729 14:40:52.587705 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.587718 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:52.587726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:52.587799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:52.630642 1039759 cri.go:89] found id: ""
	I0729 14:40:52.630670 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.630678 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:52.630684 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:52.630740 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:52.665978 1039759 cri.go:89] found id: ""
	I0729 14:40:52.666010 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.666022 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:52.666030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:52.666103 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:52.701111 1039759 cri.go:89] found id: ""
	I0729 14:40:52.701140 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.701148 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:52.701155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:52.701211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:52.744219 1039759 cri.go:89] found id: ""
	I0729 14:40:52.744247 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.744257 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:52.744265 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:52.744329 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:52.781081 1039759 cri.go:89] found id: ""
	I0729 14:40:52.781113 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.781122 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:52.781128 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:52.781198 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:52.817938 1039759 cri.go:89] found id: ""
	I0729 14:40:52.817974 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.817985 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:52.817999 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:52.818016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:52.895387 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:52.895416 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:52.895433 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.976313 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:52.976356 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:53.013814 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:53.013852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:53.065901 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:53.065937 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:55.580590 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:55.595023 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:55.595108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:55.631449 1039759 cri.go:89] found id: ""
	I0729 14:40:55.631479 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.631487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:55.631494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:55.631551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:55.666245 1039759 cri.go:89] found id: ""
	I0729 14:40:55.666274 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.666283 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:55.666296 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:55.666364 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:55.706582 1039759 cri.go:89] found id: ""
	I0729 14:40:55.706611 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.706621 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:55.706629 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:55.706696 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:55.741930 1039759 cri.go:89] found id: ""
	I0729 14:40:55.741962 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.741973 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:55.741990 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:55.742058 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:55.781440 1039759 cri.go:89] found id: ""
	I0729 14:40:55.781475 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.781486 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:55.781494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:55.781599 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:55.825329 1039759 cri.go:89] found id: ""
	I0729 14:40:55.825366 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.825377 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:55.825387 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:55.825466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:55.860834 1039759 cri.go:89] found id: ""
	I0729 14:40:55.860866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.860878 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:55.860886 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:55.860950 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:55.895460 1039759 cri.go:89] found id: ""
	I0729 14:40:55.895492 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.895502 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:55.895514 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:55.895531 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:55.951739 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:55.951781 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:55.965760 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:55.965792 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:56.044422 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:56.044458 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:56.044477 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:56.123669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:56.123714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:58.668279 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:58.682912 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:58.682974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:58.718757 1039759 cri.go:89] found id: ""
	I0729 14:40:58.718787 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.718798 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:58.718807 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:58.718861 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:58.756986 1039759 cri.go:89] found id: ""
	I0729 14:40:58.757015 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.757025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:58.757031 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:58.757092 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:58.797572 1039759 cri.go:89] found id: ""
	I0729 14:40:58.797600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.797611 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:58.797620 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:58.797689 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:58.839410 1039759 cri.go:89] found id: ""
	I0729 14:40:58.839442 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.839453 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:58.839461 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:58.839523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:58.874477 1039759 cri.go:89] found id: ""
	I0729 14:40:58.874508 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.874519 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:58.874528 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:58.874602 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:58.910248 1039759 cri.go:89] found id: ""
	I0729 14:40:58.910281 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.910296 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:58.910307 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:58.910368 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:58.944845 1039759 cri.go:89] found id: ""
	I0729 14:40:58.944879 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.944890 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:58.944896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:58.944955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:58.978818 1039759 cri.go:89] found id: ""
	I0729 14:40:58.978854 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.978867 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:58.978879 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:58.978898 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:59.018961 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:59.018993 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:59.069883 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:59.069920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:59.083277 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:59.083304 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:59.159470 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:59.159494 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:59.159511 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:01.746915 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:01.759883 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:01.759949 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:01.796563 1039759 cri.go:89] found id: ""
	I0729 14:41:01.796589 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.796602 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:01.796608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:01.796691 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:01.831464 1039759 cri.go:89] found id: ""
	I0729 14:41:01.831499 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.831511 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:01.831520 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:01.831586 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:01.868633 1039759 cri.go:89] found id: ""
	I0729 14:41:01.868660 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.868668 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:01.868674 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:01.868732 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:01.903154 1039759 cri.go:89] found id: ""
	I0729 14:41:01.903183 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.903194 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:01.903202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:01.903272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:01.938256 1039759 cri.go:89] found id: ""
	I0729 14:41:01.938292 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.938304 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:01.938312 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:01.938384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:01.978117 1039759 cri.go:89] found id: ""
	I0729 14:41:01.978147 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.978159 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:01.978168 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:01.978242 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:02.014061 1039759 cri.go:89] found id: ""
	I0729 14:41:02.014089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.014100 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:02.014108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:02.014176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:02.050133 1039759 cri.go:89] found id: ""
	I0729 14:41:02.050165 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.050177 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:02.050189 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:02.050206 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:02.101188 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:02.101253 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:02.114343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:02.114369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:02.190309 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:02.190338 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:02.190354 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:02.266895 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:02.266939 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:04.809474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:04.824652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:04.824725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:04.858442 1039759 cri.go:89] found id: ""
	I0729 14:41:04.858474 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.858483 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:04.858490 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:04.858542 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:04.893199 1039759 cri.go:89] found id: ""
	I0729 14:41:04.893229 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.893237 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:04.893243 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:04.893297 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:04.929480 1039759 cri.go:89] found id: ""
	I0729 14:41:04.929512 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.929524 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:04.929532 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:04.929601 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:04.965097 1039759 cri.go:89] found id: ""
	I0729 14:41:04.965127 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.965139 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:04.965147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:04.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:05.003419 1039759 cri.go:89] found id: ""
	I0729 14:41:05.003449 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.003460 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:05.003467 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:05.003557 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:05.037408 1039759 cri.go:89] found id: ""
	I0729 14:41:05.037439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.037451 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:05.037458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:05.037527 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:05.072909 1039759 cri.go:89] found id: ""
	I0729 14:41:05.072942 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.072953 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:05.072961 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:05.073034 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:05.123731 1039759 cri.go:89] found id: ""
	I0729 14:41:05.123764 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.123776 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:05.123787 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:05.123802 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:05.188687 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:05.188732 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:05.204119 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:05.204160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:05.294702 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:05.294732 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:05.294750 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:05.377412 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:05.377456 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:07.923437 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:07.937633 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:07.937711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:07.976813 1039759 cri.go:89] found id: ""
	I0729 14:41:07.976850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:07.976861 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:07.976872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:07.976946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:08.013051 1039759 cri.go:89] found id: ""
	I0729 14:41:08.013089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.013100 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:08.013109 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:08.013177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:08.047372 1039759 cri.go:89] found id: ""
	I0729 14:41:08.047404 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.047413 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:08.047420 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:08.047477 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:08.080555 1039759 cri.go:89] found id: ""
	I0729 14:41:08.080594 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.080607 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:08.080615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:08.080684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:08.117054 1039759 cri.go:89] found id: ""
	I0729 14:41:08.117087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.117098 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:08.117106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:08.117175 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:08.152270 1039759 cri.go:89] found id: ""
	I0729 14:41:08.152295 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.152303 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:08.152309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:08.152373 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:08.188804 1039759 cri.go:89] found id: ""
	I0729 14:41:08.188830 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.188842 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:08.188848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:08.188903 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:08.225101 1039759 cri.go:89] found id: ""
	I0729 14:41:08.225139 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.225151 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:08.225164 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:08.225182 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:08.278721 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:08.278759 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:08.293417 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:08.293453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:08.371802 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:08.371825 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:08.371843 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:08.452233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:08.452274 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:10.993379 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:11.007599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:11.007668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:11.045603 1039759 cri.go:89] found id: ""
	I0729 14:41:11.045652 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.045675 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:11.045683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:11.045746 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:11.079682 1039759 cri.go:89] found id: ""
	I0729 14:41:11.079711 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.079722 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:11.079730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:11.079797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:11.122138 1039759 cri.go:89] found id: ""
	I0729 14:41:11.122167 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.122180 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:11.122185 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:11.122249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:11.157416 1039759 cri.go:89] found id: ""
	I0729 14:41:11.157444 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.157452 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:11.157458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:11.157514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:11.198589 1039759 cri.go:89] found id: ""
	I0729 14:41:11.198631 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.198643 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:11.198652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:11.198725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:11.238329 1039759 cri.go:89] found id: ""
	I0729 14:41:11.238360 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.238369 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:11.238376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:11.238442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:11.273283 1039759 cri.go:89] found id: ""
	I0729 14:41:11.273313 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.273322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:11.273328 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:11.273382 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:11.313927 1039759 cri.go:89] found id: ""
	I0729 14:41:11.313972 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.313984 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:11.313997 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:11.314014 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:11.366507 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:11.366546 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:11.380529 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:11.380566 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:11.451839 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:11.451862 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:11.451882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:11.537109 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:11.537150 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:14.104794 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:14.117474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:14.117541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:14.154117 1039759 cri.go:89] found id: ""
	I0729 14:41:14.154151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.154163 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:14.154171 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:14.154236 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:14.195762 1039759 cri.go:89] found id: ""
	I0729 14:41:14.195793 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.195804 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:14.195812 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:14.195875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:14.231434 1039759 cri.go:89] found id: ""
	I0729 14:41:14.231460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.231467 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:14.231474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:14.231523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:14.264802 1039759 cri.go:89] found id: ""
	I0729 14:41:14.264839 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.264851 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:14.264859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:14.264932 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:14.300162 1039759 cri.go:89] found id: ""
	I0729 14:41:14.300184 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.300194 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:14.300202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:14.300262 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:14.335351 1039759 cri.go:89] found id: ""
	I0729 14:41:14.335385 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.335396 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:14.335404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:14.335468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:14.370064 1039759 cri.go:89] found id: ""
	I0729 14:41:14.370096 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.370107 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:14.370115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:14.370184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:14.406506 1039759 cri.go:89] found id: ""
	I0729 14:41:14.406538 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.406549 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:14.406562 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:14.406579 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:14.445641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:14.445681 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:14.496132 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:14.496165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:14.509732 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:14.509767 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:14.581519 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:14.581541 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:14.581558 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.164487 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:17.178359 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:17.178447 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:17.213780 1039759 cri.go:89] found id: ""
	I0729 14:41:17.213869 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.213887 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:17.213896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:17.213966 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:17.251006 1039759 cri.go:89] found id: ""
	I0729 14:41:17.251045 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.251056 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:17.251063 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:17.251135 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:17.306624 1039759 cri.go:89] found id: ""
	I0729 14:41:17.306654 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.306683 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:17.306691 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:17.306775 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:17.358882 1039759 cri.go:89] found id: ""
	I0729 14:41:17.358915 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.358927 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:17.358935 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:17.359008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:17.408592 1039759 cri.go:89] found id: ""
	I0729 14:41:17.408620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.408636 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:17.408642 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:17.408705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:17.445201 1039759 cri.go:89] found id: ""
	I0729 14:41:17.445228 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.445236 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:17.445242 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:17.445305 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:17.477441 1039759 cri.go:89] found id: ""
	I0729 14:41:17.477483 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.477511 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:17.477518 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:17.477591 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:17.509148 1039759 cri.go:89] found id: ""
	I0729 14:41:17.509179 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.509190 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:17.509203 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:17.509220 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:17.559784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:17.559823 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:17.574163 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:17.574199 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:17.644249 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:17.644277 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:17.644294 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.720652 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:17.720688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:20.261591 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:20.274649 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:20.274731 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:20.311561 1039759 cri.go:89] found id: ""
	I0729 14:41:20.311591 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.311600 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:20.311606 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:20.311668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:20.350267 1039759 cri.go:89] found id: ""
	I0729 14:41:20.350300 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.350313 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:20.350322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:20.350379 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:20.384183 1039759 cri.go:89] found id: ""
	I0729 14:41:20.384213 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.384220 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:20.384227 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:20.384288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:20.422330 1039759 cri.go:89] found id: ""
	I0729 14:41:20.422358 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.422367 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:20.422373 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:20.422442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:20.465537 1039759 cri.go:89] found id: ""
	I0729 14:41:20.465568 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.465577 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:20.465586 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:20.465663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:20.507661 1039759 cri.go:89] found id: ""
	I0729 14:41:20.507691 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.507701 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:20.507710 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:20.507774 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:20.545830 1039759 cri.go:89] found id: ""
	I0729 14:41:20.545857 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.545866 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:20.545872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:20.545936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:20.586311 1039759 cri.go:89] found id: ""
	I0729 14:41:20.586345 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.586354 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:20.586364 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:20.586379 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:20.635183 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:20.635224 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:20.649660 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:20.649701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:20.729588 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:20.729613 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:20.729632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:20.811565 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:20.811605 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:23.354318 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:23.367784 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:23.367862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:23.401929 1039759 cri.go:89] found id: ""
	I0729 14:41:23.401956 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.401965 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:23.401970 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:23.402033 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:23.437130 1039759 cri.go:89] found id: ""
	I0729 14:41:23.437161 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.437185 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:23.437205 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:23.437267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:23.474029 1039759 cri.go:89] found id: ""
	I0729 14:41:23.474066 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.474078 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:23.474087 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:23.474159 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:23.506678 1039759 cri.go:89] found id: ""
	I0729 14:41:23.506714 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.506725 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:23.506732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:23.506791 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:23.541578 1039759 cri.go:89] found id: ""
	I0729 14:41:23.541618 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.541628 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:23.541636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:23.541709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:23.575852 1039759 cri.go:89] found id: ""
	I0729 14:41:23.575883 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.575891 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:23.575898 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:23.575955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:23.610611 1039759 cri.go:89] found id: ""
	I0729 14:41:23.610638 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.610646 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:23.610653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:23.610717 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:23.650403 1039759 cri.go:89] found id: ""
	I0729 14:41:23.650429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.650438 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:23.650448 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:23.650460 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:23.701856 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:23.701899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:23.716925 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:23.716958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:23.790678 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:23.790699 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:23.790717 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:23.873204 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:23.873242 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:26.414319 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:26.428069 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:26.428152 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:26.462538 1039759 cri.go:89] found id: ""
	I0729 14:41:26.462578 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.462590 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:26.462599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:26.462687 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:26.496461 1039759 cri.go:89] found id: ""
	I0729 14:41:26.496501 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.496513 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:26.496521 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:26.496593 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:26.534152 1039759 cri.go:89] found id: ""
	I0729 14:41:26.534190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.534203 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:26.534210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:26.534273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:26.572986 1039759 cri.go:89] found id: ""
	I0729 14:41:26.573016 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.573024 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:26.573030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:26.573097 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:26.607330 1039759 cri.go:89] found id: ""
	I0729 14:41:26.607359 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.607370 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:26.607378 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:26.607445 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:26.643023 1039759 cri.go:89] found id: ""
	I0729 14:41:26.643056 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.643067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:26.643078 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:26.643145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:26.679820 1039759 cri.go:89] found id: ""
	I0729 14:41:26.679846 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.679856 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:26.679865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:26.679930 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:26.716433 1039759 cri.go:89] found id: ""
	I0729 14:41:26.716462 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.716470 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:26.716480 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:26.716494 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:26.794508 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:26.794529 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:26.794542 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:26.876663 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:26.876701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:26.917309 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:26.917343 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:26.969397 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:26.969436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:29.483935 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:29.497502 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:29.497585 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:29.532671 1039759 cri.go:89] found id: ""
	I0729 14:41:29.532698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.532712 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:29.532719 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:29.532784 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:29.568058 1039759 cri.go:89] found id: ""
	I0729 14:41:29.568085 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.568096 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:29.568103 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:29.568176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:29.601173 1039759 cri.go:89] found id: ""
	I0729 14:41:29.601206 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.601216 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:29.601225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:29.601284 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:29.634333 1039759 cri.go:89] found id: ""
	I0729 14:41:29.634372 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.634384 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:29.634393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:29.634460 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:29.669669 1039759 cri.go:89] found id: ""
	I0729 14:41:29.669698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.669706 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:29.669712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:29.669777 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:29.702847 1039759 cri.go:89] found id: ""
	I0729 14:41:29.702876 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.702886 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:29.702894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:29.702960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:29.740713 1039759 cri.go:89] found id: ""
	I0729 14:41:29.740743 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.740754 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:29.740762 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:29.740846 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:29.777795 1039759 cri.go:89] found id: ""
	I0729 14:41:29.777829 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.777841 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:29.777853 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:29.777869 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:29.858713 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:29.858758 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:29.896873 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:29.896914 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:29.946905 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:29.946945 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:29.960136 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:29.960170 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:30.035951 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.536130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:32.549431 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:32.549501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:32.586069 1039759 cri.go:89] found id: ""
	I0729 14:41:32.586098 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.586117 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:32.586125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:32.586183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:32.623094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.623123 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.623132 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:32.623138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:32.623205 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:32.658370 1039759 cri.go:89] found id: ""
	I0729 14:41:32.658406 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.658418 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:32.658426 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:32.658492 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:32.696436 1039759 cri.go:89] found id: ""
	I0729 14:41:32.696469 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.696478 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:32.696484 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:32.696551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:32.731306 1039759 cri.go:89] found id: ""
	I0729 14:41:32.731340 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.731352 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:32.731361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:32.731431 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:32.767049 1039759 cri.go:89] found id: ""
	I0729 14:41:32.767087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.767098 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:32.767106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:32.767179 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:32.805094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.805126 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.805138 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:32.805147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:32.805223 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:32.840088 1039759 cri.go:89] found id: ""
	I0729 14:41:32.840116 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.840125 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:32.840137 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:32.840155 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:32.854065 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:32.854095 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:32.921447 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.921477 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:32.921493 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:33.005086 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:33.005129 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:33.042555 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:33.042617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:35.593173 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:35.605965 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:35.606031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:35.639315 1039759 cri.go:89] found id: ""
	I0729 14:41:35.639355 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.639367 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:35.639374 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:35.639466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:35.678657 1039759 cri.go:89] found id: ""
	I0729 14:41:35.678686 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.678695 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:35.678700 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:35.678764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:35.714108 1039759 cri.go:89] found id: ""
	I0729 14:41:35.714136 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.714147 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:35.714155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:35.714220 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:35.748793 1039759 cri.go:89] found id: ""
	I0729 14:41:35.748820 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.748831 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:35.748837 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:35.748891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:35.788853 1039759 cri.go:89] found id: ""
	I0729 14:41:35.788884 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.788895 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:35.788903 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:35.788971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:35.825032 1039759 cri.go:89] found id: ""
	I0729 14:41:35.825059 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.825067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:35.825074 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:35.825126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:35.859990 1039759 cri.go:89] found id: ""
	I0729 14:41:35.860022 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.860033 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:35.860041 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:35.860131 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:35.894318 1039759 cri.go:89] found id: ""
	I0729 14:41:35.894352 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.894364 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:35.894377 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:35.894393 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:35.907591 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:35.907617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:35.975000 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:35.975023 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:35.975040 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:36.056188 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:36.056226 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:36.094569 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:36.094606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.648685 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:38.661546 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:38.661612 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:38.698658 1039759 cri.go:89] found id: ""
	I0729 14:41:38.698692 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.698704 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:38.698711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:38.698797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:38.731239 1039759 cri.go:89] found id: ""
	I0729 14:41:38.731274 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.731282 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:38.731288 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:38.731341 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:38.766549 1039759 cri.go:89] found id: ""
	I0729 14:41:38.766583 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.766594 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:38.766602 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:38.766663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:38.803347 1039759 cri.go:89] found id: ""
	I0729 14:41:38.803374 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.803385 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:38.803393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:38.803467 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:38.840327 1039759 cri.go:89] found id: ""
	I0729 14:41:38.840363 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.840374 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:38.840384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:38.840480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:38.874181 1039759 cri.go:89] found id: ""
	I0729 14:41:38.874211 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.874219 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:38.874225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:38.874293 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:38.908642 1039759 cri.go:89] found id: ""
	I0729 14:41:38.908674 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.908686 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:38.908694 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:38.908762 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:38.945081 1039759 cri.go:89] found id: ""
	I0729 14:41:38.945107 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.945116 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:38.945126 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:38.945140 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.999792 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:38.999826 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:39.013396 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:39.013421 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:39.077975 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:39.077998 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:39.078016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:39.169606 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:39.169654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.716258 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:41.730508 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:41.730579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:41.766457 1039759 cri.go:89] found id: ""
	I0729 14:41:41.766490 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.766498 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:41.766505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:41.766571 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:41.801073 1039759 cri.go:89] found id: ""
	I0729 14:41:41.801099 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.801109 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:41.801117 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:41.801178 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:41.836962 1039759 cri.go:89] found id: ""
	I0729 14:41:41.836986 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.836997 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:41.837005 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:41.837072 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:41.870169 1039759 cri.go:89] found id: ""
	I0729 14:41:41.870195 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.870205 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:41.870213 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:41.870274 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:41.902298 1039759 cri.go:89] found id: ""
	I0729 14:41:41.902323 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.902331 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:41.902337 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:41.902387 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:41.935394 1039759 cri.go:89] found id: ""
	I0729 14:41:41.935429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.935441 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:41.935449 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:41.935513 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:41.972397 1039759 cri.go:89] found id: ""
	I0729 14:41:41.972437 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.972448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:41.972456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:41.972525 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:42.006477 1039759 cri.go:89] found id: ""
	I0729 14:41:42.006503 1039759 logs.go:276] 0 containers: []
	W0729 14:41:42.006513 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:42.006526 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:42.006540 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:42.053853 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:42.053886 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:42.067143 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:42.067172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:42.135406 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:42.135432 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:42.135449 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:42.212571 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:42.212603 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:44.751283 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:44.764600 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:44.764688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:44.800821 1039759 cri.go:89] found id: ""
	I0729 14:41:44.800850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.800857 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:44.800863 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:44.800924 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:44.834638 1039759 cri.go:89] found id: ""
	I0729 14:41:44.834670 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.834680 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:44.834686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:44.834744 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:44.870198 1039759 cri.go:89] found id: ""
	I0729 14:41:44.870225 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.870237 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:44.870245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:44.870312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:44.904588 1039759 cri.go:89] found id: ""
	I0729 14:41:44.904620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.904631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:44.904639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:44.904713 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:44.939442 1039759 cri.go:89] found id: ""
	I0729 14:41:44.939467 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.939474 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:44.939480 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:44.939541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:44.972771 1039759 cri.go:89] found id: ""
	I0729 14:41:44.972799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.972808 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:44.972815 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:44.972888 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:45.007513 1039759 cri.go:89] found id: ""
	I0729 14:41:45.007540 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.007549 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:45.007557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:45.007626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:45.038752 1039759 cri.go:89] found id: ""
	I0729 14:41:45.038778 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.038787 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:45.038797 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:45.038821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:45.089807 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:45.089838 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:45.103188 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:45.103221 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:45.174509 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:45.174532 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:45.174554 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:45.255288 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:45.255327 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:47.799207 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:47.814781 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:47.814866 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:47.855111 1039759 cri.go:89] found id: ""
	I0729 14:41:47.855143 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.855156 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:47.855164 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:47.855230 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:47.892542 1039759 cri.go:89] found id: ""
	I0729 14:41:47.892577 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.892589 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:47.892603 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:47.892674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:47.933408 1039759 cri.go:89] found id: ""
	I0729 14:41:47.933439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.933451 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:47.933458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:47.933531 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:47.970397 1039759 cri.go:89] found id: ""
	I0729 14:41:47.970427 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.970439 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:47.970447 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:47.970514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:48.006852 1039759 cri.go:89] found id: ""
	I0729 14:41:48.006880 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.006891 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:48.006899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:48.006967 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:48.046766 1039759 cri.go:89] found id: ""
	I0729 14:41:48.046799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.046811 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:48.046820 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:48.046893 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:48.084354 1039759 cri.go:89] found id: ""
	I0729 14:41:48.084380 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.084387 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:48.084393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:48.084468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:48.121526 1039759 cri.go:89] found id: ""
	I0729 14:41:48.121559 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.121571 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:48.121582 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:48.121606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:48.136753 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:48.136784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:48.206914 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:48.206942 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:48.206958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:48.283843 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:48.283882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:48.325845 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:48.325878 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:50.881346 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:50.894098 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:50.894177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:50.927345 1039759 cri.go:89] found id: ""
	I0729 14:41:50.927375 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.927386 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:50.927399 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:50.927466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:50.962700 1039759 cri.go:89] found id: ""
	I0729 14:41:50.962726 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.962734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:50.962740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:50.962804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:50.997299 1039759 cri.go:89] found id: ""
	I0729 14:41:50.997334 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.997346 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:50.997354 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:50.997419 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:51.030157 1039759 cri.go:89] found id: ""
	I0729 14:41:51.030190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.030202 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:51.030211 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:51.030288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:51.063123 1039759 cri.go:89] found id: ""
	I0729 14:41:51.063151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.063162 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:51.063170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:51.063237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:51.096772 1039759 cri.go:89] found id: ""
	I0729 14:41:51.096819 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.096830 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:51.096838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:51.096912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:51.131976 1039759 cri.go:89] found id: ""
	I0729 14:41:51.132004 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.132014 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:51.132022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:51.132095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:51.167560 1039759 cri.go:89] found id: ""
	I0729 14:41:51.167599 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.167610 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:51.167622 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:51.167640 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:51.229416 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:51.229455 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:51.243576 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:51.243604 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:51.311103 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:51.311123 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:51.311139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:51.396369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:51.396432 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:53.942329 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:53.955960 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:53.956027 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:53.988039 1039759 cri.go:89] found id: ""
	I0729 14:41:53.988074 1039759 logs.go:276] 0 containers: []
	W0729 14:41:53.988085 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:53.988094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:53.988162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:54.020948 1039759 cri.go:89] found id: ""
	I0729 14:41:54.020981 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.020992 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:54.020999 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:54.021067 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:54.053716 1039759 cri.go:89] found id: ""
	I0729 14:41:54.053744 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.053752 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:54.053759 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:54.053811 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:54.092348 1039759 cri.go:89] found id: ""
	I0729 14:41:54.092378 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.092390 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:54.092398 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:54.092471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:54.126114 1039759 cri.go:89] found id: ""
	I0729 14:41:54.126176 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.126189 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:54.126199 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:54.126316 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:54.162125 1039759 cri.go:89] found id: ""
	I0729 14:41:54.162157 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.162167 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:54.162174 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:54.162241 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:54.202407 1039759 cri.go:89] found id: ""
	I0729 14:41:54.202439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.202448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:54.202456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:54.202522 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:54.238650 1039759 cri.go:89] found id: ""
	I0729 14:41:54.238684 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.238695 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:54.238704 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:54.238718 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:54.291200 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:54.291243 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:54.306381 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:54.306415 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:54.371355 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:54.371384 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:54.371399 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:54.455200 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:54.455237 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:56.994689 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:57.007893 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:57.007958 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:57.041775 1039759 cri.go:89] found id: ""
	I0729 14:41:57.041808 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.041820 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:57.041828 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:57.041894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:57.075409 1039759 cri.go:89] found id: ""
	I0729 14:41:57.075442 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.075454 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:57.075462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:57.075524 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:57.120963 1039759 cri.go:89] found id: ""
	I0729 14:41:57.121000 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.121011 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:57.121019 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:57.121088 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:57.164882 1039759 cri.go:89] found id: ""
	I0729 14:41:57.164912 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.164923 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:57.164932 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:57.165001 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:57.198511 1039759 cri.go:89] found id: ""
	I0729 14:41:57.198537 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.198545 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:57.198550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:57.198604 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:57.238516 1039759 cri.go:89] found id: ""
	I0729 14:41:57.238544 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.238552 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:57.238559 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:57.238622 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:57.271823 1039759 cri.go:89] found id: ""
	I0729 14:41:57.271854 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.271865 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:57.271873 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:57.271937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:57.308435 1039759 cri.go:89] found id: ""
	I0729 14:41:57.308460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.308472 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:57.308483 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:57.308506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:57.359783 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:57.359818 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:57.372669 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:57.372698 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:57.440979 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:57.441004 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:57.441018 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:57.520105 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:57.520139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:00.060542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:00.076125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:00.076192 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:00.113095 1039759 cri.go:89] found id: ""
	I0729 14:42:00.113129 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.113137 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:00.113150 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:00.113206 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:00.154104 1039759 cri.go:89] found id: ""
	I0729 14:42:00.154132 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.154139 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:00.154146 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:00.154202 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:00.190416 1039759 cri.go:89] found id: ""
	I0729 14:42:00.190443 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.190454 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:00.190462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:00.190532 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:00.228138 1039759 cri.go:89] found id: ""
	I0729 14:42:00.228173 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.228185 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:00.228192 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:00.228261 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:00.265679 1039759 cri.go:89] found id: ""
	I0729 14:42:00.265706 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.265715 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:00.265721 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:00.265787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:00.300283 1039759 cri.go:89] found id: ""
	I0729 14:42:00.300315 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.300333 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:00.300341 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:00.300433 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:00.339224 1039759 cri.go:89] found id: ""
	I0729 14:42:00.339255 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.339264 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:00.339270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:00.339333 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:00.375780 1039759 cri.go:89] found id: ""
	I0729 14:42:00.375815 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.375826 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:00.375836 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:00.375851 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:00.425145 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:00.425190 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:00.438860 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:00.438891 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:00.512668 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:00.512695 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:00.512714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:00.597083 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:00.597139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.141962 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:03.156295 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:03.156372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:03.192860 1039759 cri.go:89] found id: ""
	I0729 14:42:03.192891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.192902 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:03.192911 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:03.192982 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:03.234078 1039759 cri.go:89] found id: ""
	I0729 14:42:03.234104 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.234113 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:03.234119 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:03.234171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:03.268099 1039759 cri.go:89] found id: ""
	I0729 14:42:03.268124 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.268131 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:03.268138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:03.268197 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:03.306470 1039759 cri.go:89] found id: ""
	I0729 14:42:03.306498 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.306507 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:03.306513 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:03.306596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:03.341902 1039759 cri.go:89] found id: ""
	I0729 14:42:03.341933 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.341944 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:03.341952 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:03.342019 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:03.377235 1039759 cri.go:89] found id: ""
	I0729 14:42:03.377271 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.377282 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:03.377291 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:03.377355 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:03.411273 1039759 cri.go:89] found id: ""
	I0729 14:42:03.411308 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.411316 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:03.411322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:03.411397 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:03.446482 1039759 cri.go:89] found id: ""
	I0729 14:42:03.446511 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.446519 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:03.446530 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:03.446545 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:03.460222 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:03.460262 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:03.548149 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:03.548175 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:03.548191 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:03.640563 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:03.640608 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.681685 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:03.681713 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:06.234967 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:06.249656 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:06.249726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:06.284768 1039759 cri.go:89] found id: ""
	I0729 14:42:06.284798 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.284810 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:06.284822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:06.284880 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:06.321109 1039759 cri.go:89] found id: ""
	I0729 14:42:06.321140 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.321150 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:06.321158 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:06.321229 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:06.357238 1039759 cri.go:89] found id: ""
	I0729 14:42:06.357269 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.357278 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:06.357284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:06.357342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:06.391613 1039759 cri.go:89] found id: ""
	I0729 14:42:06.391643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.391653 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:06.391661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:06.391726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:06.428782 1039759 cri.go:89] found id: ""
	I0729 14:42:06.428813 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.428823 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:06.428831 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:06.428890 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:06.463558 1039759 cri.go:89] found id: ""
	I0729 14:42:06.463596 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.463607 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:06.463615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:06.463683 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:06.500442 1039759 cri.go:89] found id: ""
	I0729 14:42:06.500474 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.500484 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:06.500501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:06.500579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:06.535589 1039759 cri.go:89] found id: ""
	I0729 14:42:06.535627 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.535638 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:06.535650 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:06.535668 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:06.584641 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:06.584676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:06.597702 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:06.597737 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:06.664499 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:06.664537 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:06.664555 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:06.744808 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:06.744845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:09.286151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:09.307822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:09.307892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:09.369334 1039759 cri.go:89] found id: ""
	I0729 14:42:09.369363 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.369373 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:09.369381 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:09.369458 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:09.402302 1039759 cri.go:89] found id: ""
	I0729 14:42:09.402334 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.402345 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:09.402353 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:09.402423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:09.436351 1039759 cri.go:89] found id: ""
	I0729 14:42:09.436380 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.436402 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:09.436429 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:09.436501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:09.467735 1039759 cri.go:89] found id: ""
	I0729 14:42:09.467768 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.467780 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:09.467788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:09.467849 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:09.503328 1039759 cri.go:89] found id: ""
	I0729 14:42:09.503355 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.503367 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:09.503376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:09.503438 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:09.540012 1039759 cri.go:89] found id: ""
	I0729 14:42:09.540039 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.540047 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:09.540053 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:09.540106 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:09.576737 1039759 cri.go:89] found id: ""
	I0729 14:42:09.576801 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.576814 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:09.576822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:09.576920 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:09.614624 1039759 cri.go:89] found id: ""
	I0729 14:42:09.614651 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.614659 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:09.614669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:09.614684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:09.650533 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:09.650580 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:09.709144 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:09.709175 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:09.724147 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:09.724173 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:09.790737 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:09.790760 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:09.790775 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.376968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:12.390344 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:12.390409 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:12.424820 1039759 cri.go:89] found id: ""
	I0729 14:42:12.424849 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.424860 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:12.424876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:12.424943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:12.457444 1039759 cri.go:89] found id: ""
	I0729 14:42:12.457480 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.457492 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:12.457500 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:12.457561 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:12.490027 1039759 cri.go:89] found id: ""
	I0729 14:42:12.490058 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.490069 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:12.490077 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:12.490145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:12.523229 1039759 cri.go:89] found id: ""
	I0729 14:42:12.523256 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.523265 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:12.523270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:12.523321 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:12.557849 1039759 cri.go:89] found id: ""
	I0729 14:42:12.557875 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.557885 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:12.557891 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:12.557951 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:12.592943 1039759 cri.go:89] found id: ""
	I0729 14:42:12.592973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.592982 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:12.592989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:12.593059 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:12.626495 1039759 cri.go:89] found id: ""
	I0729 14:42:12.626531 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.626539 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:12.626557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:12.626641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:12.663764 1039759 cri.go:89] found id: ""
	I0729 14:42:12.663793 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.663805 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:12.663818 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:12.663835 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:12.722521 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:12.722556 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:12.736476 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:12.736505 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:12.809582 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:12.809617 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:12.809637 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.890665 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:12.890712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:15.429702 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:15.443258 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:15.443340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:15.477170 1039759 cri.go:89] found id: ""
	I0729 14:42:15.477198 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.477207 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:15.477212 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:15.477266 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:15.511614 1039759 cri.go:89] found id: ""
	I0729 14:42:15.511652 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.511665 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:15.511671 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:15.511739 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:15.548472 1039759 cri.go:89] found id: ""
	I0729 14:42:15.548501 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.548511 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:15.548519 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:15.548590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:15.589060 1039759 cri.go:89] found id: ""
	I0729 14:42:15.589090 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.589102 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:15.589110 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:15.589185 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:15.622846 1039759 cri.go:89] found id: ""
	I0729 14:42:15.622873 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.622882 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:15.622887 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:15.622943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:15.656193 1039759 cri.go:89] found id: ""
	I0729 14:42:15.656220 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.656229 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:15.656237 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:15.656307 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:15.691301 1039759 cri.go:89] found id: ""
	I0729 14:42:15.691336 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.691348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:15.691357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:15.691420 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:15.729923 1039759 cri.go:89] found id: ""
	I0729 14:42:15.729963 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.729974 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:15.729988 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:15.730004 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:15.783531 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:15.783569 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:15.799590 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:15.799619 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:15.874849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:15.874886 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:15.874901 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:15.957384 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:15.957424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.497035 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:18.511538 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:18.511616 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:18.550512 1039759 cri.go:89] found id: ""
	I0729 14:42:18.550552 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.550573 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:18.550582 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:18.550642 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:18.585910 1039759 cri.go:89] found id: ""
	I0729 14:42:18.585942 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.585954 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:18.585962 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:18.586031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:18.619680 1039759 cri.go:89] found id: ""
	I0729 14:42:18.619712 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.619722 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:18.619730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:18.619799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:18.651559 1039759 cri.go:89] found id: ""
	I0729 14:42:18.651592 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.651604 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:18.651613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:18.651688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:18.686668 1039759 cri.go:89] found id: ""
	I0729 14:42:18.686693 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.686701 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:18.686711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:18.686764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:18.722832 1039759 cri.go:89] found id: ""
	I0729 14:42:18.722859 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.722869 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:18.722876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:18.722927 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:18.758261 1039759 cri.go:89] found id: ""
	I0729 14:42:18.758289 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.758302 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:18.758310 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:18.758378 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:18.795190 1039759 cri.go:89] found id: ""
	I0729 14:42:18.795216 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.795227 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:18.795237 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:18.795251 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.835331 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:18.835366 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:18.889707 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:18.889745 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:18.902477 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:18.902503 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:18.970712 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:18.970735 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:18.970748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:21.552092 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:21.566581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.566669 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.600230 1039759 cri.go:89] found id: ""
	I0729 14:42:21.600261 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.600275 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:21.600283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.600346 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.636576 1039759 cri.go:89] found id: ""
	I0729 14:42:21.636616 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.636627 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:21.636635 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.636705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.672944 1039759 cri.go:89] found id: ""
	I0729 14:42:21.672973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.672984 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:21.672997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.673063 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.708555 1039759 cri.go:89] found id: ""
	I0729 14:42:21.708582 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.708601 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:21.708613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:21.708673 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:21.744862 1039759 cri.go:89] found id: ""
	I0729 14:42:21.744891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.744902 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:21.744908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:21.744973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:21.779084 1039759 cri.go:89] found id: ""
	I0729 14:42:21.779111 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.779119 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:21.779126 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:21.779183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:21.819931 1039759 cri.go:89] found id: ""
	I0729 14:42:21.819972 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.819981 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:21.819989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:21.820047 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:21.855472 1039759 cri.go:89] found id: ""
	I0729 14:42:21.855500 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.855509 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:21.855522 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:21.855539 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:21.925561 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:21.925579 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:21.925596 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.015986 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:22.016032 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:22.059898 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:22.059935 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:22.129018 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.129055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:24.645474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:24.658107 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:24.658171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:24.696604 1039759 cri.go:89] found id: ""
	I0729 14:42:24.696635 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.696645 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:24.696653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:24.696725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:24.733862 1039759 cri.go:89] found id: ""
	I0729 14:42:24.733887 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.733894 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:24.733901 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:24.733957 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:24.770614 1039759 cri.go:89] found id: ""
	I0729 14:42:24.770644 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.770656 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:24.770664 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:24.770734 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:24.806368 1039759 cri.go:89] found id: ""
	I0729 14:42:24.806394 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.806403 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:24.806408 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:24.806470 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:24.838490 1039759 cri.go:89] found id: ""
	I0729 14:42:24.838526 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.838534 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:24.838541 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:24.838596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:24.871017 1039759 cri.go:89] found id: ""
	I0729 14:42:24.871043 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.871051 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:24.871057 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:24.871128 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:24.903281 1039759 cri.go:89] found id: ""
	I0729 14:42:24.903311 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.903322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:24.903330 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:24.903403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:24.937245 1039759 cri.go:89] found id: ""
	I0729 14:42:24.937279 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.937291 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:24.937304 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:24.937319 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:24.989518 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:24.989551 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:25.005021 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:25.005055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:25.080849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:25.080877 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:25.080893 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:25.163742 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:25.163784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:27.706182 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:27.719350 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:27.719425 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:27.756955 1039759 cri.go:89] found id: ""
	I0729 14:42:27.756982 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.756990 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:27.756997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:27.757054 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:27.791975 1039759 cri.go:89] found id: ""
	I0729 14:42:27.792014 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.792025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:27.792033 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:27.792095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:27.834188 1039759 cri.go:89] found id: ""
	I0729 14:42:27.834215 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.834223 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:27.834230 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:27.834296 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:27.867798 1039759 cri.go:89] found id: ""
	I0729 14:42:27.867834 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.867843 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:27.867851 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:27.867918 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:27.900316 1039759 cri.go:89] found id: ""
	I0729 14:42:27.900343 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.900351 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:27.900357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:27.900422 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:27.932361 1039759 cri.go:89] found id: ""
	I0729 14:42:27.932391 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.932402 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:27.932425 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:27.932493 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:27.965530 1039759 cri.go:89] found id: ""
	I0729 14:42:27.965562 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.965573 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:27.965581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:27.965651 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:27.999582 1039759 cri.go:89] found id: ""
	I0729 14:42:27.999608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.999617 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:27.999626 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:27.999654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:28.069415 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:28.069438 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:28.069454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:28.149781 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:28.149821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:28.190045 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:28.190072 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:28.244147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:28.244188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:30.758335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:30.771788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:30.771860 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:30.807608 1039759 cri.go:89] found id: ""
	I0729 14:42:30.807633 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.807641 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:30.807647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:30.807709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:30.842361 1039759 cri.go:89] found id: ""
	I0729 14:42:30.842389 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.842397 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:30.842404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:30.842474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:30.879123 1039759 cri.go:89] found id: ""
	I0729 14:42:30.879149 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.879157 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:30.879162 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:30.879228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:30.913042 1039759 cri.go:89] found id: ""
	I0729 14:42:30.913072 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.913084 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:30.913092 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:30.913162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:30.949867 1039759 cri.go:89] found id: ""
	I0729 14:42:30.949900 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.949910 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:30.949919 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:30.949988 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:30.997468 1039759 cri.go:89] found id: ""
	I0729 14:42:30.997497 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.997509 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:30.997516 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:30.997606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:31.039611 1039759 cri.go:89] found id: ""
	I0729 14:42:31.039643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.039654 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:31.039662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:31.039730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:31.085802 1039759 cri.go:89] found id: ""
	I0729 14:42:31.085839 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.085851 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:31.085862 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:31.085890 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:31.155919 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:31.155941 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:31.155954 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:31.232795 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:31.232833 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:31.270647 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:31.270682 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:31.324648 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.324685 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:33.839801 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:33.853358 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:33.853417 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:33.889294 1039759 cri.go:89] found id: ""
	I0729 14:42:33.889323 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.889334 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:33.889342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:33.889413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:33.930106 1039759 cri.go:89] found id: ""
	I0729 14:42:33.930130 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.930142 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:33.930149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:33.930211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:33.973607 1039759 cri.go:89] found id: ""
	I0729 14:42:33.973634 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.973646 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:33.973654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:33.973715 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:34.010103 1039759 cri.go:89] found id: ""
	I0729 14:42:34.010133 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.010142 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:34.010149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:34.010209 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:34.044050 1039759 cri.go:89] found id: ""
	I0729 14:42:34.044080 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.044092 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:34.044099 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:34.044174 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:34.081222 1039759 cri.go:89] found id: ""
	I0729 14:42:34.081250 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.081260 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:34.081268 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:34.081360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:34.115837 1039759 cri.go:89] found id: ""
	I0729 14:42:34.115878 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.115891 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:34.115899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:34.115973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:34.151086 1039759 cri.go:89] found id: ""
	I0729 14:42:34.151116 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.151126 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:34.151139 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:34.151156 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:34.164058 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:34.164087 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:34.238481 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:34.238503 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:34.238518 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:34.316236 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:34.316279 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:34.356281 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:34.356316 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:36.910374 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:36.924907 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:36.925008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:36.960508 1039759 cri.go:89] found id: ""
	I0729 14:42:36.960535 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.960543 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:36.960550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:36.960631 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:36.999840 1039759 cri.go:89] found id: ""
	I0729 14:42:36.999869 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.999881 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:36.999889 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:36.999960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:37.032801 1039759 cri.go:89] found id: ""
	I0729 14:42:37.032832 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.032840 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:37.032847 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:37.032907 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:37.066359 1039759 cri.go:89] found id: ""
	I0729 14:42:37.066386 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.066394 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:37.066401 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:37.066454 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:37.103816 1039759 cri.go:89] found id: ""
	I0729 14:42:37.103844 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.103852 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:37.103859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:37.103922 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:37.137135 1039759 cri.go:89] found id: ""
	I0729 14:42:37.137175 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.137186 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:37.137194 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:37.137267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:37.170819 1039759 cri.go:89] found id: ""
	I0729 14:42:37.170851 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.170863 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:37.170871 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:37.170941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:37.206427 1039759 cri.go:89] found id: ""
	I0729 14:42:37.206456 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.206467 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:37.206478 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:37.206492 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:37.287119 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:37.287160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:37.331090 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:37.331119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:37.392147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:37.392189 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:37.406017 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:37.406047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:37.471644 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
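	(Note: the block above is one iteration of the restart wait loop: it polls for a kube-apiserver process, lists CRI containers by component name, and gathers kubelet, dmesg, CRI-O and container-status logs before retrying. A minimal sketch of the equivalent manual checks on the node, assuming SSH access to the VM, e.g. via "minikube ssh"; the commands are the same ones quoted in the log above:

	# does CRI-O know about an apiserver container at all?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# recent kubelet and CRI-O logs, as gathered by the loop
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# overall container status
	sudo crictl ps -a
	)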
	I0729 14:42:39.972835 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:39.985878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:39.985945 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:40.020312 1039759 cri.go:89] found id: ""
	I0729 14:42:40.020349 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.020360 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:40.020368 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:40.020456 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:40.055688 1039759 cri.go:89] found id: ""
	I0729 14:42:40.055721 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.055732 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:40.055740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:40.055799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:40.090432 1039759 cri.go:89] found id: ""
	I0729 14:42:40.090463 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.090472 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:40.090478 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:40.090549 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:40.127794 1039759 cri.go:89] found id: ""
	I0729 14:42:40.127823 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.127832 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:40.127838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:40.127894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:40.162911 1039759 cri.go:89] found id: ""
	I0729 14:42:40.162944 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.162953 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:40.162959 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:40.163020 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:40.201578 1039759 cri.go:89] found id: ""
	I0729 14:42:40.201608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.201619 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:40.201625 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:40.201684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:40.247314 1039759 cri.go:89] found id: ""
	I0729 14:42:40.247340 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.247348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:40.247363 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:40.247436 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:40.285393 1039759 cri.go:89] found id: ""
	I0729 14:42:40.285422 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.285431 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:40.285440 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:40.285458 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:40.299901 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:40.299933 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:40.372774 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:40.372802 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:40.372821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:40.454392 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:40.454447 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:40.494641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:40.494671 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:43.046060 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:43.058790 1039759 kubeadm.go:597] duration metric: took 4m3.37086398s to restartPrimaryControlPlane
	W0729 14:42:43.058888 1039759 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:43.058920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:43.544647 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:43.560304 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:42:43.570229 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:42:43.579922 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:42:43.579946 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:42:43.580004 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:42:43.589520 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:42:43.589591 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:42:43.600286 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:42:43.611565 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:42:43.611629 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:42:43.623432 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.633289 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:42:43.633338 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.643410 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:42:43.653723 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:42:43.653816 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
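	(Note: the grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is deleted so kubeadm can regenerate it. A minimal sketch of that logic, using the paths and endpoint shown in the log; the loop form is an illustration, not minikube's own code:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it points at the expected endpoint; otherwise remove it
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
	)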
	I0729 14:42:43.663840 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:42:43.735243 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:42:43.735314 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:42:43.904148 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:42:43.904310 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:42:43.904480 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:42:44.101401 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:42:44.103392 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:42:44.103499 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:42:44.103580 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:42:44.103693 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:42:44.103829 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:42:44.103944 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:42:44.104054 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:42:44.104146 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:42:44.104360 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:42:44.104599 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:42:44.105264 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:42:44.105363 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:42:44.105461 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:42:44.426107 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:42:44.593004 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:42:44.845387 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:42:44.934634 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:42:44.959808 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:42:44.961918 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:42:44.961990 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:42:45.117986 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:42:45.119775 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:42:45.119913 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:42:45.121333 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:42:45.123001 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:42:45.123783 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:42:45.126031 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:43:25.126835 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:43:25.127033 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:25.127306 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:30.127504 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:30.127777 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:40.128244 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:40.128447 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:00.129004 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:00.129267 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:40.130597 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:40.130831 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:40.130848 1039759 kubeadm.go:310] 
	I0729 14:44:40.130903 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:44:40.130956 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:44:40.130966 1039759 kubeadm.go:310] 
	I0729 14:44:40.131032 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:44:40.131110 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:44:40.131256 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:44:40.131270 1039759 kubeadm.go:310] 
	I0729 14:44:40.131450 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:44:40.131499 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:44:40.131542 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:44:40.131552 1039759 kubeadm.go:310] 
	I0729 14:44:40.131686 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:44:40.131795 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:44:40.131806 1039759 kubeadm.go:310] 
	I0729 14:44:40.131947 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:44:40.132064 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:44:40.132162 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:44:40.132254 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:44:40.132264 1039759 kubeadm.go:310] 
	I0729 14:44:40.133208 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:40.133363 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:44:40.133468 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 14:44:40.133610 1039759 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
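	(Note: kubeadm's failure message above already names the relevant follow-ups. As a minimal sketch, assuming SSH access to the node, the same checks could be run by hand; these are the commands quoted in the log, with CONTAINERID as a placeholder for an ID taken from the listing:

	# is the kubelet service running and healthy?
	systemctl status kubelet
	journalctl -xeu kubelet
	# did any control-plane containers start (and possibly crash) under CRI-O?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	)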
	
	I0729 14:44:40.133676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:44:40.607039 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:40.623771 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:44:40.636278 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:44:40.636310 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:44:40.636371 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:44:40.647768 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:44:40.647827 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:44:40.658281 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:44:40.668393 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:44:40.668477 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:44:40.678521 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.687891 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:44:40.687960 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.698384 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:44:40.708965 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:44:40.709047 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:44:40.719665 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:44:40.796786 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:44:40.796883 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:44:40.946106 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:44:40.946258 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:44:40.946388 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:44:41.140483 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:44:41.142390 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:44:41.142503 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:44:41.142610 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:44:41.142722 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:44:41.142811 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:44:41.142910 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:44:41.142995 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:44:41.143092 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:44:41.143180 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:44:41.143279 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:44:41.143390 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:44:41.143445 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:44:41.143524 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:44:41.188854 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:44:41.329957 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:44:41.968599 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:44:42.034788 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:44:42.055543 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:44:42.056622 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:44:42.056715 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:44:42.204165 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:44:42.205935 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:44:42.206076 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:44:42.216259 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:44:42.217947 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:44:42.219361 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:44:42.221672 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:45:22.223830 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:45:22.223940 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:22.224139 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:27.224303 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:27.224574 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:37.224905 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:37.225090 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:57.226285 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:57.226533 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227279 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:46:37.227485 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227494 1039759 kubeadm.go:310] 
	I0729 14:46:37.227528 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:46:37.227605 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:46:37.227627 1039759 kubeadm.go:310] 
	I0729 14:46:37.227683 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:46:37.227732 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:46:37.227861 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:46:37.227870 1039759 kubeadm.go:310] 
	I0729 14:46:37.228011 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:46:37.228093 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:46:37.228140 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:46:37.228173 1039759 kubeadm.go:310] 
	I0729 14:46:37.228310 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:46:37.228443 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:46:37.228454 1039759 kubeadm.go:310] 
	I0729 14:46:37.228606 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:46:37.228714 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:46:37.228821 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:46:37.228913 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:46:37.228934 1039759 kubeadm.go:310] 
	I0729 14:46:37.229926 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:46:37.230070 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:46:37.230175 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:46:37.230284 1039759 kubeadm.go:394] duration metric: took 7m57.608522587s to StartCluster
	I0729 14:46:37.230347 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:46:37.230435 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:46:37.276238 1039759 cri.go:89] found id: ""
	I0729 14:46:37.276294 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.276304 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:46:37.276317 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:46:37.276439 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:46:37.309934 1039759 cri.go:89] found id: ""
	I0729 14:46:37.309960 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.309969 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:46:37.309975 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:46:37.310031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:46:37.343286 1039759 cri.go:89] found id: ""
	I0729 14:46:37.343312 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.343320 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:46:37.343325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:46:37.343384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:46:37.378735 1039759 cri.go:89] found id: ""
	I0729 14:46:37.378763 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.378773 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:46:37.378779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:46:37.378834 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:46:37.414244 1039759 cri.go:89] found id: ""
	I0729 14:46:37.414275 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.414284 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:46:37.414290 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:46:37.414372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:46:37.453809 1039759 cri.go:89] found id: ""
	I0729 14:46:37.453842 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.453858 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:46:37.453866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:46:37.453955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:46:37.492250 1039759 cri.go:89] found id: ""
	I0729 14:46:37.492279 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.492288 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:46:37.492294 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:46:37.492360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:46:37.554342 1039759 cri.go:89] found id: ""
	I0729 14:46:37.554377 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.554388 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:46:37.554404 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:46:37.554422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:46:37.631118 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:46:37.631165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:46:37.650991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:46:37.651047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:46:37.731852 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:46:37.731880 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:46:37.731897 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:46:37.849049 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:46:37.849092 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 14:46:37.893957 1039759 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:46:37.894031 1039759 out.go:239] * 
	W0729 14:46:37.894120 1039759 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.894150 1039759 out.go:239] * 
	W0729 14:46:37.895278 1039759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:46:37.898735 1039759 out.go:177] 
	W0729 14:46:37.900049 1039759 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.900115 1039759 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:46:37.900146 1039759 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 14:46:37.901531 1039759 out.go:177] 

                                                
                                                
** /stderr **
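The kubeadm output captured above points at the kubelet itself ('systemctl status kubelet', 'journalctl -xeu kubelet') and at listing control-plane containers with crictl. A minimal sketch of running those same diagnostics against this profile, assuming the standard 'minikube ssh' subcommand and the profile name used throughout this report (the diagnostic commands are the ones quoted in the kubeadm error; they are not re-run here):

    # Check whether the kubelet service is running inside the node
    out/minikube-linux-amd64 -p old-k8s-version-360866 ssh "sudo systemctl status kubelet --no-pager"
    # Tail the kubelet journal for the actual startup error
    out/minikube-linux-amd64 -p old-k8s-version-360866 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
    # List any control-plane containers CRI-O started (command quoted from the kubeadm output)
    out/minikube-linux-amd64 -p old-k8s-version-360866 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"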
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-360866 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
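The log's own suggestion is to retry with the kubelet cgroup driver forced to systemd. A hedged sketch of that retry, simply appending the suggested flag to the exact start command the test used (whether this resolves the failure is not verified in this report):

    out/minikube-linux-amd64 start -p old-k8s-version-360866 --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd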
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 2 (235.010871ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-360866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-360866 logs -n 25: (1.601634358s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-513289 sudo cat                             | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo find                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo crio                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-513289                                      | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-054967 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | disable-driver-mounts-054967                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:34:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:34:53.874295 1039759 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:34:53.874567 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874577 1039759 out.go:304] Setting ErrFile to fd 2...
	I0729 14:34:53.874580 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874774 1039759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:34:53.875294 1039759 out.go:298] Setting JSON to false
	I0729 14:34:53.876313 1039759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15446,"bootTime":1722248248,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:34:53.876373 1039759 start.go:139] virtualization: kvm guest
	I0729 14:34:53.878446 1039759 out.go:177] * [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:34:53.879820 1039759 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:34:53.879855 1039759 notify.go:220] Checking for updates...
	I0729 14:34:53.882201 1039759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:34:53.883330 1039759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:34:53.884514 1039759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:34:53.885734 1039759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:34:53.886894 1039759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:34:53.888361 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:34:53.888789 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.888850 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.903960 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 14:34:53.904467 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.905083 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.905112 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.905449 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.905609 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.907360 1039759 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 14:34:53.908710 1039759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:34:53.909026 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.909064 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.923834 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0729 14:34:53.924300 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.924787 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.924809 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.925150 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.925352 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.960368 1039759 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:34:53.961649 1039759 start.go:297] selected driver: kvm2
	I0729 14:34:53.961662 1039759 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.961778 1039759 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:34:53.962398 1039759 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.962459 1039759 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:34:53.977941 1039759 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:34:53.978311 1039759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:34:53.978341 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:34:53.978350 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:34:53.978395 1039759 start.go:340] cluster config:
	{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.978499 1039759 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.980167 1039759 out.go:177] * Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	I0729 14:34:55.588663 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:34:53.981356 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:34:53.981390 1039759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:34:53.981400 1039759 cache.go:56] Caching tarball of preloaded images
	I0729 14:34:53.981477 1039759 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:34:53.981487 1039759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:34:53.981600 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:34:53.981775 1039759 start.go:360] acquireMachinesLock for old-k8s-version-360866: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:34:58.660730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:04.740665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:07.812781 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:13.892659 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:16.964692 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:23.044749 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:26.116761 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:32.196730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:35.268709 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:41.348712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:44.420693 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:50.500715 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:53.572717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:59.652707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:02.724722 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:08.804719 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:11.876665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:17.956684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:21.028707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:27.108667 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:30.180710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:36.260645 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:39.332717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:45.412694 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:48.484713 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:54.564703 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:57.636707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:03.716690 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:06.788660 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:12.868658 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:15.940708 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:22.020684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:25.092712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:31.172710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:34.177216 1039263 start.go:364] duration metric: took 3m42.890532077s to acquireMachinesLock for "embed-certs-668123"
	I0729 14:37:34.177291 1039263 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:34.177300 1039263 fix.go:54] fixHost starting: 
	I0729 14:37:34.177641 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:34.177673 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:34.193427 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0729 14:37:34.193879 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:34.194396 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:37:34.194421 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:34.194774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:34.194987 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:34.195156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:37:34.196597 1039263 fix.go:112] recreateIfNeeded on embed-certs-668123: state=Stopped err=<nil>
	I0729 14:37:34.196642 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	W0729 14:37:34.196802 1039263 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:34.198564 1039263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-668123" ...
	I0729 14:37:34.199926 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Start
	I0729 14:37:34.200086 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring networks are active...
	I0729 14:37:34.200833 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network default is active
	I0729 14:37:34.201159 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network mk-embed-certs-668123 is active
	I0729 14:37:34.201578 1039263 main.go:141] libmachine: (embed-certs-668123) Getting domain xml...
	I0729 14:37:34.202214 1039263 main.go:141] libmachine: (embed-certs-668123) Creating domain...
	I0729 14:37:34.510575 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting to get IP...
	I0729 14:37:34.511459 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.511909 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.512006 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.511904 1040307 retry.go:31] will retry after 294.890973ms: waiting for machine to come up
	I0729 14:37:34.808513 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.809044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.809070 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.809007 1040307 retry.go:31] will retry after 296.152247ms: waiting for machine to come up
	I0729 14:37:35.106423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.106839 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.106872 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.106773 1040307 retry.go:31] will retry after 384.830082ms: waiting for machine to come up
	I0729 14:37:35.493463 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.493902 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.493933 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.493861 1040307 retry.go:31] will retry after 490.673812ms: waiting for machine to come up
	I0729 14:37:35.986675 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.987184 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.987235 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.987099 1040307 retry.go:31] will retry after 725.022775ms: waiting for machine to come up
	I0729 14:37:34.174673 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:34.174713 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175060 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:37:34.175084 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175279 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:37:34.177042 1038758 machine.go:97] duration metric: took 4m37.39644293s to provisionDockerMachine
	I0729 14:37:34.177087 1038758 fix.go:56] duration metric: took 4m37.417815827s for fixHost
	I0729 14:37:34.177094 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 4m37.417912853s
	W0729 14:37:34.177127 1038758 start.go:714] error starting host: provision: host is not running
	W0729 14:37:34.177230 1038758 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 14:37:34.177240 1038758 start.go:729] Will try again in 5 seconds ...
	I0729 14:37:36.714078 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:36.714502 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:36.714565 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:36.714389 1040307 retry.go:31] will retry after 722.684756ms: waiting for machine to come up
	I0729 14:37:37.438316 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:37.438859 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:37.438891 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:37.438802 1040307 retry.go:31] will retry after 1.163999997s: waiting for machine to come up
	I0729 14:37:38.604109 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:38.604503 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:38.604531 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:38.604469 1040307 retry.go:31] will retry after 1.401566003s: waiting for machine to come up
	I0729 14:37:40.007310 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:40.007900 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:40.007929 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:40.007839 1040307 retry.go:31] will retry after 1.40470791s: waiting for machine to come up
	I0729 14:37:39.178982 1038758 start.go:360] acquireMachinesLock for no-preload-603534: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:37:41.414509 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:41.415018 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:41.415049 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:41.414959 1040307 retry.go:31] will retry after 2.205183048s: waiting for machine to come up
	I0729 14:37:43.623427 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:43.623894 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:43.623922 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:43.623856 1040307 retry.go:31] will retry after 2.444881913s: waiting for machine to come up
	I0729 14:37:46.070961 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:46.071314 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:46.071338 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:46.071271 1040307 retry.go:31] will retry after 3.115189863s: waiting for machine to come up
	I0729 14:37:49.187610 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:49.188107 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:49.188134 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:49.188054 1040307 retry.go:31] will retry after 3.139484284s: waiting for machine to come up
	I0729 14:37:53.653416 1039440 start.go:364] duration metric: took 3m41.12464482s to acquireMachinesLock for "default-k8s-diff-port-751306"
	I0729 14:37:53.653486 1039440 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:53.653494 1039440 fix.go:54] fixHost starting: 
	I0729 14:37:53.653880 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:53.653913 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:53.671499 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0729 14:37:53.671927 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:53.672550 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:37:53.672584 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:53.672986 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:53.673198 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:37:53.673353 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:37:53.674706 1039440 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751306: state=Stopped err=<nil>
	I0729 14:37:53.674736 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	W0729 14:37:53.674896 1039440 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:53.677098 1039440 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751306" ...
	I0729 14:37:52.329477 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.329880 1039263 main.go:141] libmachine: (embed-certs-668123) Found IP for machine: 192.168.50.53
	I0729 14:37:52.329895 1039263 main.go:141] libmachine: (embed-certs-668123) Reserving static IP address...
	I0729 14:37:52.329906 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has current primary IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.330376 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.330414 1039263 main.go:141] libmachine: (embed-certs-668123) Reserved static IP address: 192.168.50.53
	I0729 14:37:52.330433 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | skip adding static IP to network mk-embed-certs-668123 - found existing host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"}
	I0729 14:37:52.330453 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Getting to WaitForSSH function...
	I0729 14:37:52.330465 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting for SSH to be available...
	I0729 14:37:52.332510 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332794 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.332821 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332897 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH client type: external
	I0729 14:37:52.332931 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa (-rw-------)
	I0729 14:37:52.332963 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:37:52.332976 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | About to run SSH command:
	I0729 14:37:52.332989 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | exit 0
	I0729 14:37:52.456152 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | SSH cmd err, output: <nil>: 
	I0729 14:37:52.456532 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetConfigRaw
	I0729 14:37:52.457156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.459620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.459946 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.459980 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.460200 1039263 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/config.json ...
	I0729 14:37:52.460384 1039263 machine.go:94] provisionDockerMachine start ...
	I0729 14:37:52.460404 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:52.460672 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.462798 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463089 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.463119 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463260 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.463428 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463594 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463703 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.463856 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.464071 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.464080 1039263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:37:52.564925 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:37:52.564959 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565214 1039263 buildroot.go:166] provisioning hostname "embed-certs-668123"
	I0729 14:37:52.565241 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565472 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.568131 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568450 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.568482 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568615 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.568825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.568975 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.569143 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.569335 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.569511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.569522 1039263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-668123 && echo "embed-certs-668123" | sudo tee /etc/hostname
	I0729 14:37:52.686424 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-668123
	
	I0729 14:37:52.686459 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.689074 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689387 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.689422 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689619 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.689825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.689999 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.690164 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.690338 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.690511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.690526 1039263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-668123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-668123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-668123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:37:52.801778 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:52.801812 1039263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:37:52.801841 1039263 buildroot.go:174] setting up certificates
	I0729 14:37:52.801851 1039263 provision.go:84] configureAuth start
	I0729 14:37:52.801863 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.802133 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.804526 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.804877 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.804910 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.805053 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.807140 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807369 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.807395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807505 1039263 provision.go:143] copyHostCerts
	I0729 14:37:52.807594 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:37:52.807608 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:37:52.807698 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:37:52.807840 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:37:52.807852 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:37:52.807891 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:37:52.807969 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:37:52.807979 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:37:52.808011 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:37:52.808084 1039263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-668123 san=[127.0.0.1 192.168.50.53 embed-certs-668123 localhost minikube]
	I0729 14:37:53.007382 1039263 provision.go:177] copyRemoteCerts
	I0729 14:37:53.007459 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:37:53.007548 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.010097 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010465 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.010488 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010660 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.010864 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.011037 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.011193 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.092043 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 14:37:53.116737 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:37:53.139828 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:37:53.162813 1039263 provision.go:87] duration metric: took 360.943219ms to configureAuth
	I0729 14:37:53.162856 1039263 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:37:53.163051 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:37:53.163144 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.165757 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166102 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.166130 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166272 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.166465 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166665 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166817 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.166983 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.167154 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.167169 1039263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:37:53.428217 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:37:53.428246 1039263 machine.go:97] duration metric: took 967.84942ms to provisionDockerMachine
	I0729 14:37:53.428258 1039263 start.go:293] postStartSetup for "embed-certs-668123" (driver="kvm2")
	I0729 14:37:53.428269 1039263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:37:53.428298 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.428641 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:37:53.428669 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.431228 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431593 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.431620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431797 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.431992 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.432159 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.432313 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.511226 1039263 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:37:53.515527 1039263 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:37:53.515555 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:37:53.515635 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:37:53.515724 1039263 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:37:53.515846 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:37:53.525606 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:53.548757 1039263 start.go:296] duration metric: took 120.484005ms for postStartSetup
	I0729 14:37:53.548798 1039263 fix.go:56] duration metric: took 19.371497305s for fixHost
	I0729 14:37:53.548827 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.551373 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551697 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.551725 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.552085 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552226 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552383 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.552574 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.552746 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.552756 1039263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:37:53.653267 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263873.628230451
	
	I0729 14:37:53.653291 1039263 fix.go:216] guest clock: 1722263873.628230451
	I0729 14:37:53.653301 1039263 fix.go:229] Guest: 2024-07-29 14:37:53.628230451 +0000 UTC Remote: 2024-07-29 14:37:53.548802078 +0000 UTC m=+242.399919494 (delta=79.428373ms)
	I0729 14:37:53.653329 1039263 fix.go:200] guest clock delta is within tolerance: 79.428373ms
	I0729 14:37:53.653337 1039263 start.go:83] releasing machines lock for "embed-certs-668123", held for 19.476079428s
	I0729 14:37:53.653364 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.653673 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:53.656383 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656805 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.656836 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656958 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657597 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657831 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657923 1039263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:37:53.657981 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.658101 1039263 ssh_runner.go:195] Run: cat /version.json
	I0729 14:37:53.658129 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.660964 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661349 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661374 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661400 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661446 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661628 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661711 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661795 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.661918 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.662012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662092 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662200 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.662234 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.764286 1039263 ssh_runner.go:195] Run: systemctl --version
	I0729 14:37:53.772936 1039263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:37:53.922874 1039263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:37:53.928953 1039263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:37:53.929035 1039263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:37:53.947388 1039263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:37:53.947417 1039263 start.go:495] detecting cgroup driver to use...
	I0729 14:37:53.947496 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:37:53.964141 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:37:53.985980 1039263 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:37:53.986042 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:37:54.009646 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:37:54.023449 1039263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:37:54.139511 1039263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:37:54.312559 1039263 docker.go:233] disabling docker service ...
	I0729 14:37:54.312655 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:37:54.327466 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:37:54.342225 1039263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:37:54.485007 1039263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:37:54.623987 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:37:54.638100 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:37:54.658833 1039263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:37:54.658911 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.670274 1039263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:37:54.670366 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.681548 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.691626 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.701915 1039263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:37:54.713399 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.723631 1039263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.740625 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.751521 1039263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:37:54.761895 1039263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:37:54.761942 1039263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:37:54.775663 1039263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:37:54.785415 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:54.933441 1039263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:37:55.066449 1039263 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:37:55.066539 1039263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:37:55.071614 1039263 start.go:563] Will wait 60s for crictl version
	I0729 14:37:55.071671 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:37:55.075727 1039263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:37:55.117286 1039263 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
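The sequence just above (restart crio, wait up to 60s for /var/run/crio/crio.sock to appear, then query the runtime version through crictl) is a plain poll-until-present pattern. A minimal Go sketch of that pattern, for illustration only and not taken from minikube's sources:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a socket path until it exists or the timeout
	// elapses, mirroring the "Will wait 60s for socket path" step in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present; crictl can be invoked now
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}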
	I0729 14:37:55.117395 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.145732 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.179714 1039263 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:37:55.181109 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:55.184274 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.184734 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:55.184761 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.185066 1039263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 14:37:55.190374 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:55.206768 1039263 kubeadm.go:883] updating cluster {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:37:55.207054 1039263 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:37:55.207130 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:55.247814 1039263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:37:55.247890 1039263 ssh_runner.go:195] Run: which lz4
	I0729 14:37:55.251992 1039263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:37:55.256440 1039263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:37:55.256468 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:37:53.678402 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Start
	I0729 14:37:53.678610 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring networks are active...
	I0729 14:37:53.679311 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network default is active
	I0729 14:37:53.679767 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network mk-default-k8s-diff-port-751306 is active
	I0729 14:37:53.680133 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Getting domain xml...
	I0729 14:37:53.680818 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Creating domain...
	I0729 14:37:54.024601 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting to get IP...
	I0729 14:37:54.025431 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025838 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025944 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.025837 1040438 retry.go:31] will retry after 280.254814ms: waiting for machine to come up
	I0729 14:37:54.307727 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308293 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.308220 1040438 retry.go:31] will retry after 384.348242ms: waiting for machine to come up
	I0729 14:37:54.693703 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694304 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.694251 1040438 retry.go:31] will retry after 417.76448ms: waiting for machine to come up
	I0729 14:37:55.113670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114243 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114272 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.114191 1040438 retry.go:31] will retry after 589.741485ms: waiting for machine to come up
	I0729 14:37:55.706127 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706646 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.706569 1040438 retry.go:31] will retry after 471.427821ms: waiting for machine to come up
	I0729 14:37:56.179380 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179867 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179896 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.179814 1040438 retry.go:31] will retry after 624.275074ms: waiting for machine to come up
	I0729 14:37:56.805673 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806042 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806063 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.806018 1040438 retry.go:31] will retry after 1.027377333s: waiting for machine to come up
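The repeated "waiting for machine to come up" entries above come from a lookup that retries with a growing pause until the libvirt domain's DHCP lease (and therefore its IP address) shows up. A generic sketch of that retry pattern; the starting delay, the doubling, and the stub lookup are illustrative assumptions, not minikube's actual values:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff keeps calling lookup until it succeeds or the deadline
	// passes, roughly growing the pause between attempts like the log above.
	func retryWithBackoff(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond // illustrative starting delay
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay *= 2
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		// Stub lookup that never finds an IP; a real caller would query the
		// libvirt DHCP leases for the domain's MAC address.
		_, err := retryWithBackoff(func() (string, error) {
			return "", errors.New("no lease yet")
		}, 2*time.Second)
		fmt.Println(err)
	}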
	I0729 14:37:56.743842 1039263 crio.go:462] duration metric: took 1.49188656s to copy over tarball
	I0729 14:37:56.743941 1039263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:37:58.879573 1039263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135595087s)
	I0729 14:37:58.879619 1039263 crio.go:469] duration metric: took 2.135735155s to extract the tarball
	I0729 14:37:58.879628 1039263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:37:58.916966 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:58.958323 1039263 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:37:58.958349 1039263 cache_images.go:84] Images are preloaded, skipping loading
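The preload step above copies an lz4-compressed image tarball onto the node and unpacks it into /var before re-listing images with crictl. A small sketch of the extract step as a single command invocation; the paths and flags are the ones shown in the log, while the Go wrapper itself is only illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command: extract the preload tarball into /var,
		// preserving security xattrs, using lz4 for decompression.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
		}
	}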
	I0729 14:37:58.958357 1039263 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.30.3 crio true true} ...
	I0729 14:37:58.958537 1039263 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-668123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:37:58.958632 1039263 ssh_runner.go:195] Run: crio config
	I0729 14:37:59.004120 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:37:59.004146 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:37:59.004163 1039263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:37:59.004192 1039263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-668123 NodeName:embed-certs-668123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:37:59.004371 1039263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-668123"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:37:59.004469 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:37:59.014796 1039263 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:37:59.014866 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:37:59.024575 1039263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 14:37:59.040707 1039263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:37:59.056693 1039263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 14:37:59.073320 1039263 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0729 14:37:59.077226 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:59.091283 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:59.221532 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:37:59.239319 1039263 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123 for IP: 192.168.50.53
	I0729 14:37:59.239362 1039263 certs.go:194] generating shared ca certs ...
	I0729 14:37:59.239387 1039263 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:37:59.239604 1039263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:37:59.239654 1039263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:37:59.239667 1039263 certs.go:256] generating profile certs ...
	I0729 14:37:59.239818 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/client.key
	I0729 14:37:59.239922 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key.544998fe
	I0729 14:37:59.239969 1039263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key
	I0729 14:37:59.240137 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:37:59.240188 1039263 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:37:59.240202 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:37:59.240238 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:37:59.240280 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:37:59.240313 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:37:59.240385 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:59.241551 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:37:59.278842 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:37:59.305668 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:37:59.332314 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:37:59.377867 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 14:37:59.405592 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:37:59.438073 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:37:59.462130 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:37:59.489158 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:37:59.511811 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:37:59.534728 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:37:59.558680 1039263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:37:59.575404 1039263 ssh_runner.go:195] Run: openssl version
	I0729 14:37:59.581518 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:37:59.592024 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596913 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596983 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.602973 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:37:59.613891 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:37:59.624053 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628881 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628922 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.634672 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:37:59.645513 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:37:59.656385 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661141 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661209 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.667169 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
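Each CA bundle above is made discoverable by hashing it with openssl and linking /etc/ssl/certs/<hash>.0 back to the PEM file, which is the layout OpenSSL uses to look up trusted certificates. A short sketch of that hash-and-link step; the helper name and the final ln command it prints are illustrative, not minikube code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash runs `openssl x509 -hash -noout -in <cert>` and returns the
	// subject hash that OpenSSL expects as the symlink name.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println(err)
			return
		}
		// The log links /etc/ssl/certs/<hash>.0 to the PEM (e.g. b5213941.0 above).
		fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
	}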
	I0729 14:37:59.678240 1039263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:37:59.683075 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:37:59.689013 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:37:59.694754 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:37:59.700865 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:37:59.706664 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:37:59.712457 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:37:59.718347 1039263 kubeadm.go:392] StartCluster: {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:37:59.718460 1039263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:37:59.718505 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.756046 1039263 cri.go:89] found id: ""
	I0729 14:37:59.756143 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:37:59.766198 1039263 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:37:59.766222 1039263 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:37:59.766278 1039263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:37:59.775664 1039263 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:37:59.776877 1039263 kubeconfig.go:125] found "embed-certs-668123" server: "https://192.168.50.53:8443"
	I0729 14:37:59.778802 1039263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:37:59.787805 1039263 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.53
	I0729 14:37:59.787840 1039263 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:37:59.787854 1039263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:37:59.787908 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.828927 1039263 cri.go:89] found id: ""
	I0729 14:37:59.829016 1039263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:37:59.844889 1039263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:37:59.854233 1039263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:37:59.854264 1039263 kubeadm.go:157] found existing configuration files:
	
	I0729 14:37:59.854334 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:37:59.863123 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:37:59.863183 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:37:59.872154 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:37:59.880819 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:37:59.880881 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:37:59.889714 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.898278 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:37:59.898330 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.907358 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:37:59.916352 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:37:59.916430 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:37:59.925239 1039263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:37:59.934353 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.045086 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.793783 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.009839 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.080217 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.153377 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:01.153496 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:37:57.835202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835636 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835674 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:57.835572 1040438 retry.go:31] will retry after 987.763901ms: waiting for machine to come up
	I0729 14:37:58.824975 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825428 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825457 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:58.825348 1040438 retry.go:31] will retry after 1.189429393s: waiting for machine to come up
	I0729 14:38:00.016130 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016569 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016604 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:00.016509 1040438 retry.go:31] will retry after 1.424039091s: waiting for machine to come up
	I0729 14:38:01.443138 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443511 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:01.443470 1040438 retry.go:31] will retry after 2.531090823s: waiting for machine to come up
	I0729 14:38:01.653905 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.153772 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.653590 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.669429 1039263 api_server.go:72] duration metric: took 1.516051254s to wait for apiserver process to appear ...
	I0729 14:38:02.669467 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:02.669495 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.531413 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.531451 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.531467 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.602173 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.602205 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.670522 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.680835 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:05.680861 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.170512 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.176052 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.176084 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.669679 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.674813 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.674854 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:07.170539 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:07.174573 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:38:07.180250 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:07.180275 1039263 api_server.go:131] duration metric: took 4.510799806s to wait for apiserver health ...
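The healthz probe above moves from 403 (the anonymous probe is rejected before the RBAC bootstrap roles exist) through 500 (post-start hooks still reporting failed) to 200 "ok". A minimal sketch of such a poll loop against the apiserver's /healthz endpoint; the insecure TLS client and the retry interval are assumptions made for illustration, not minikube's implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports "ok"
				}
				// 403 or 500 while bootstrap hooks finish: fall through and retry
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.50.53:8443/healthz", 4*time.Minute))
	}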
	I0729 14:38:07.180284 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:38:07.180290 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:07.181866 1039263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:03.976004 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976514 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976544 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:03.976474 1040438 retry.go:31] will retry after 3.356304099s: waiting for machine to come up
	I0729 14:38:07.335600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336031 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336086 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:07.335992 1040438 retry.go:31] will retry after 3.345416128s: waiting for machine to come up
	I0729 14:38:07.182966 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:07.193166 1039263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:38:07.212801 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:07.221297 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:07.221331 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:07.221340 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:07.221347 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:07.221352 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:07.221364 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:38:07.221370 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:07.221379 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:07.221384 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:38:07.221390 1039263 system_pods.go:74] duration metric: took 8.574498ms to wait for pod list to return data ...
	I0729 14:38:07.221397 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:07.224197 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:07.224220 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:07.224231 1039263 node_conditions.go:105] duration metric: took 2.829585ms to run NodePressure ...
	I0729 14:38:07.224246 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:07.520049 1039263 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524228 1039263 kubeadm.go:739] kubelet initialised
	I0729 14:38:07.524251 1039263 kubeadm.go:740] duration metric: took 4.174563ms waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524262 1039263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:07.529174 1039263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.533534 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533554 1039263 pod_ready.go:81] duration metric: took 4.355926ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.533562 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.537529 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537550 1039263 pod_ready.go:81] duration metric: took 3.975082ms for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.537561 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.542299 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542325 1039263 pod_ready.go:81] duration metric: took 4.747863ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.542371 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542390 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.616688 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616725 1039263 pod_ready.go:81] duration metric: took 74.323327ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.616740 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616750 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.016334 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016360 1039263 pod_ready.go:81] duration metric: took 399.599984ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.016369 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016374 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.416536 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416571 1039263 pod_ready.go:81] duration metric: took 400.189243ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.416585 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416594 1039263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.817526 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817561 1039263 pod_ready.go:81] duration metric: took 400.956263ms for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.817572 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817590 1039263 pod_ready.go:38] duration metric: took 1.293313082s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:08.817610 1039263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:38:08.829394 1039263 ops.go:34] apiserver oom_adj: -16
	I0729 14:38:08.829425 1039263 kubeadm.go:597] duration metric: took 9.06319609s to restartPrimaryControlPlane
	I0729 14:38:08.829436 1039263 kubeadm.go:394] duration metric: took 9.111098315s to StartCluster
	I0729 14:38:08.829457 1039263 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.829548 1039263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:08.831113 1039263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.831396 1039263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:38:08.831441 1039263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:38:08.831524 1039263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-668123"
	I0729 14:38:08.831539 1039263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-668123"
	I0729 14:38:08.831562 1039263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-668123"
	W0729 14:38:08.831572 1039263 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:38:08.831561 1039263 addons.go:69] Setting metrics-server=true in profile "embed-certs-668123"
	I0729 14:38:08.831593 1039263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-668123"
	I0729 14:38:08.831601 1039263 addons.go:234] Setting addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:08.831609 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	W0729 14:38:08.831610 1039263 addons.go:243] addon metrics-server should already be in state true
	I0729 14:38:08.831632 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:08.831644 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.831916 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831933 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831918 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831956 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831945 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831964 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.833223 1039263 out.go:177] * Verifying Kubernetes components...
	I0729 14:38:08.834403 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:08.847231 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0729 14:38:08.847362 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0729 14:38:08.847398 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0729 14:38:08.847797 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847896 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847904 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.848350 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848371 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848487 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848507 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848520 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848540 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848854 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848867 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.849010 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849392 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.849416 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.851933 1039263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-668123"
	W0729 14:38:08.851955 1039263 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:38:08.851988 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.852284 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.852330 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.865255 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I0729 14:38:08.865707 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.865981 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0729 14:38:08.866157 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866183 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.866419 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.866531 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.866804 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.866895 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866920 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.867272 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.867839 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.867885 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.868000 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0729 14:38:08.868397 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.868741 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.868886 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.868903 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.869276 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.869501 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.870835 1039263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:38:08.871289 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.872267 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:38:08.872289 1039263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:38:08.872306 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.873165 1039263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:08.874593 1039263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:08.874616 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:38:08.874635 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.875061 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875572 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.875605 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875815 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.876012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.876208 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.876370 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.877997 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878394 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.878423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878555 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.878726 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.878889 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.879002 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.890720 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0729 14:38:08.891092 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.891619 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.891638 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.891972 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.892184 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.893577 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.893817 1039263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:08.893840 1039263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:38:08.893859 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.896843 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897302 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.897320 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897464 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.897618 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.897866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.897966 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:09.019001 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:09.038038 1039263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:09.097896 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:09.101844 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:09.229339 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:38:09.229360 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:38:09.317591 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:38:09.317625 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:38:09.370444 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:09.370490 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:38:09.407869 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:10.014739 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014767 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.014873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014897 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015112 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015150 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015157 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015166 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015174 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015284 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015297 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015306 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015313 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015384 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015413 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015611 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015641 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024010 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.024031 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.024299 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.024318 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024343 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.233873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.233903 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234247 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.234260 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234275 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234292 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.234301 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234546 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234563 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234574 1039263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:10.236215 1039263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:38:10.237377 1039263 addons.go:510] duration metric: took 1.405942032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:38:11.042263 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:12.129080 1039759 start.go:364] duration metric: took 3m18.14725367s to acquireMachinesLock for "old-k8s-version-360866"
	I0729 14:38:12.129155 1039759 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:12.129166 1039759 fix.go:54] fixHost starting: 
	I0729 14:38:12.129715 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:12.129752 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:12.146596 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0729 14:38:12.147101 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:12.147554 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:38:12.147581 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:12.147871 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:12.148094 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:12.148293 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetState
	I0729 14:38:12.149880 1039759 fix.go:112] recreateIfNeeded on old-k8s-version-360866: state=Stopped err=<nil>
	I0729 14:38:12.149918 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	W0729 14:38:12.150120 1039759 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:12.152003 1039759 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-360866" ...
	I0729 14:38:10.683699 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684108 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Found IP for machine: 192.168.72.233
	I0729 14:38:10.684148 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has current primary IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684161 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserving static IP address...
	I0729 14:38:10.684506 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.684540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | skip adding static IP to network mk-default-k8s-diff-port-751306 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"}
	I0729 14:38:10.684558 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserved static IP address: 192.168.72.233
	I0729 14:38:10.684581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for SSH to be available...
	I0729 14:38:10.684600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Getting to WaitForSSH function...
	I0729 14:38:10.686336 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686684 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.686713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686825 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH client type: external
	I0729 14:38:10.686856 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa (-rw-------)
	I0729 14:38:10.686894 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:10.686904 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | About to run SSH command:
	I0729 14:38:10.686921 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | exit 0
	I0729 14:38:10.808536 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:10.808965 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetConfigRaw
	I0729 14:38:10.809613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:10.812200 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812590 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.812625 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812862 1039440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/config.json ...
	I0729 14:38:10.813089 1039440 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:10.813110 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:10.813322 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.815607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.815933 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.815962 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.816113 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.816287 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816450 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.816838 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.817167 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.817184 1039440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:10.916864 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:10.916908 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917215 1039440 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751306"
	I0729 14:38:10.917249 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.919961 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920339 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.920363 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920471 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.920660 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.921145 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.921358 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.921377 1039440 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751306 && echo "default-k8s-diff-port-751306" | sudo tee /etc/hostname
	I0729 14:38:11.034826 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751306
	
	I0729 14:38:11.034859 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.037494 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.037836 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.037870 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.038068 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.038274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038410 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038575 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.038736 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.038971 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.038998 1039440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751306/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:11.146350 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:11.146391 1039440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:11.146449 1039440 buildroot.go:174] setting up certificates
	I0729 14:38:11.146463 1039440 provision.go:84] configureAuth start
	I0729 14:38:11.146478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:11.146842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:11.149492 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149766 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.149796 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.152449 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152735 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.152785 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152956 1039440 provision.go:143] copyHostCerts
	I0729 14:38:11.153010 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:11.153021 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:11.153074 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:11.153172 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:11.153180 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:11.153198 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:11.153253 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:11.153260 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:11.153276 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:11.153324 1039440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751306 san=[127.0.0.1 192.168.72.233 default-k8s-diff-port-751306 localhost minikube]
	I0729 14:38:11.489907 1039440 provision.go:177] copyRemoteCerts
	I0729 14:38:11.489990 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:11.490028 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.492487 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492801 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.492832 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492992 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.493220 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.493467 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.493611 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.574475 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:11.598182 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:11.622809 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 14:38:11.646533 1039440 provision.go:87] duration metric: took 500.054139ms to configureAuth
	I0729 14:38:11.646563 1039440 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:11.646742 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:11.646822 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.649260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.649616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649729 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.649934 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.650436 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.650610 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.650628 1039440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:11.906877 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:11.906918 1039440 machine.go:97] duration metric: took 1.093811728s to provisionDockerMachine
	I0729 14:38:11.906936 1039440 start.go:293] postStartSetup for "default-k8s-diff-port-751306" (driver="kvm2")
	I0729 14:38:11.906951 1039440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:11.906977 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:11.907366 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:11.907407 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.910366 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910725 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.910748 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910913 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.911162 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.911323 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.911529 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.992133 1039440 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:11.996426 1039440 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:11.996456 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:11.996544 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:11.996641 1039440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:11.996747 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:12.006165 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:12.029591 1039440 start.go:296] duration metric: took 122.613174ms for postStartSetup
	I0729 14:38:12.029643 1039440 fix.go:56] duration metric: took 18.376148792s for fixHost
	I0729 14:38:12.029670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.032299 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032667 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.032731 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032901 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.033104 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033372 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.033510 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:12.033679 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:12.033688 1039440 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:12.128889 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263892.107886376
	
	I0729 14:38:12.128917 1039440 fix.go:216] guest clock: 1722263892.107886376
	I0729 14:38:12.128926 1039440 fix.go:229] Guest: 2024-07-29 14:38:12.107886376 +0000 UTC Remote: 2024-07-29 14:38:12.029648961 +0000 UTC m=+239.632909800 (delta=78.237415ms)
	I0729 14:38:12.128955 1039440 fix.go:200] guest clock delta is within tolerance: 78.237415ms
	I0729 14:38:12.128961 1039440 start.go:83] releasing machines lock for "default-k8s-diff-port-751306", held for 18.475504041s
	I0729 14:38:12.128995 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.129301 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:12.132025 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132374 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.132401 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132566 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133087 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133273 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133349 1039440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:12.133404 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.133513 1039440 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:12.133534 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.136121 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136149 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136523 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136577 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136624 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136716 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136793 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136917 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.136973 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.137088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137165 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137292 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.137232 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.233842 1039440 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:12.240082 1039440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:12.388404 1039440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:12.395038 1039440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:12.395127 1039440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:12.416590 1039440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:12.416618 1039440 start.go:495] detecting cgroup driver to use...
	I0729 14:38:12.416682 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:12.437863 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:12.453458 1039440 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:12.453508 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:12.467657 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:12.482328 1039440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:12.610786 1039440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:12.774787 1039440 docker.go:233] disabling docker service ...
	I0729 14:38:12.774861 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:12.790091 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:12.803914 1039440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:12.933894 1039440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:13.052159 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:13.069309 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:13.089959 1039440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:38:13.090014 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.102668 1039440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:13.102741 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.113634 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.124374 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.135488 1039440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:13.147171 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.159757 1039440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.178620 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.189326 1039440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:13.200007 1039440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:13.200067 1039440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:13.213063 1039440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:13.226044 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:13.360685 1039440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:13.508473 1039440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:13.508556 1039440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:13.513547 1039440 start.go:563] Will wait 60s for crictl version
	I0729 14:38:13.513619 1039440 ssh_runner.go:195] Run: which crictl
	I0729 14:38:13.518528 1039440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:13.567103 1039440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:13.567180 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.603837 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.633583 1039440 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
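
The runtime preparation logged above boils down to a short sequence: point crictl at the cri-o socket, rewrite /etc/crio/crio.conf.d/02-crio.conf for the pause image and the cgroupfs driver, make sure br_netfilter is loaded (the sysctl probe fails until it is) and IPv4 forwarding is on, then restart cri-o and wait for its socket. A condensed Go sketch of that flow follows; the command strings come from the log, while the local /bin/sh runner and the omission of the conmon_cgroup/default_sysctls edits are simplifications of minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command locally; in minikube these commands are
// sent over SSH to the guest, but a local runner keeps the sketch simple.
func run(cmd string) error {
	out, err := exec.Command("/bin/sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w\n%s", cmd, err, out)
	}
	return nil
}

// configureCRIO mirrors the sequence in the log: point crictl at the
// cri-o socket, set the pause image and cgroup driver in
// /etc/crio/crio.conf.d/02-crio.conf, make sure br_netfilter and IPv4
// forwarding are available, then restart cri-o. Paths and the pause
// image tag are taken from the log; everything else is illustrative.
func configureCRIO(pauseImage, cgroupDriver string) error {
	steps := []string{
		`printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml`,
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupDriver),
		// The sysctl only exists once br_netfilter is loaded, so fall back to modprobe.
		`sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := configureCRIO("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println("configure failed:", err)
	}
}
```
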
	I0729 14:38:12.153214 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .Start
	I0729 14:38:12.153408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring networks are active...
	I0729 14:38:12.154141 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network default is active
	I0729 14:38:12.154590 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network mk-old-k8s-version-360866 is active
	I0729 14:38:12.154970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Getting domain xml...
	I0729 14:38:12.155733 1039759 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:38:12.526504 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting to get IP...
	I0729 14:38:12.527560 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.528068 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.528147 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.528048 1040622 retry.go:31] will retry after 240.079974ms: waiting for machine to come up
	I0729 14:38:12.769388 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.769881 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.769910 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.769829 1040622 retry.go:31] will retry after 271.200632ms: waiting for machine to come up
	I0729 14:38:13.042584 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.043069 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.043101 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.043049 1040622 retry.go:31] will retry after 464.725959ms: waiting for machine to come up
	I0729 14:38:13.509830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.510400 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.510434 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.510350 1040622 retry.go:31] will retry after 416.316047ms: waiting for machine to come up
	I0729 14:38:13.042877 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:15.051282 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:13.635092 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:13.638202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638665 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:13.638691 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638933 1039440 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:13.642960 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:13.656098 1039440 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:13.656208 1039440 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:38:13.656255 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:13.697398 1039440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:38:13.697475 1039440 ssh_runner.go:195] Run: which lz4
	I0729 14:38:13.701632 1039440 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:38:13.707129 1039440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:13.707162 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:38:15.218414 1039440 crio.go:462] duration metric: took 1.516807674s to copy over tarball
	I0729 14:38:15.218505 1039440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
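
The preload step above first checks for /preloaded.tar.lz4 on the guest, copies the cached ~406MB tarball over when it is missing, and unpacks it into /var with lz4 so cri-o starts with all images already present. A sketch of that sequence, with stand-in callbacks where minikube uses its SSH runner and scp:

```go
package main

import "fmt"

// ensurePreload mirrors the log above: check whether the preload
// tarball already exists on the guest, copy it over if not, then
// extract it into /var and delete the tarball. The callbacks stand in
// for minikube's SSH/scp plumbing; the command strings are the ones
// shown in the log.
func ensurePreload(remote string, exists func(path string) bool, copyOver func(path string) error, run func(cmd string) error) error {
	if !exists(remote) {
		if err := copyOver(remote); err != nil {
			return fmt.Errorf("copying preload tarball: %w", err)
		}
	}
	cmd := fmt.Sprintf("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf %s", remote)
	if err := run(cmd); err != nil {
		return fmt.Errorf("extracting preload tarball: %w", err)
	}
	return run(fmt.Sprintf("sudo rm -f %s", remote))
}

func main() {
	// Fake runner so the sketch is self-contained; in minikube every
	// command goes through ssh_runner against the guest VM.
	err := ensurePreload("/preloaded.tar.lz4",
		func(string) bool { return false },
		func(p string) error { fmt.Println("scp ->", p); return nil },
		func(c string) error { fmt.Println("run:", c); return nil },
	)
	fmt.Println("err:", err)
}
```
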
	I0729 14:38:13.927885 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.928343 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.928373 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.928307 1040622 retry.go:31] will retry after 659.670364ms: waiting for machine to come up
	I0729 14:38:14.589644 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:14.590143 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:14.590172 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:14.590031 1040622 retry.go:31] will retry after 738.020335ms: waiting for machine to come up
	I0729 14:38:15.330093 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:15.330603 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:15.330633 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:15.330553 1040622 retry.go:31] will retry after 1.13067902s: waiting for machine to come up
	I0729 14:38:16.462554 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:16.463002 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:16.463031 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:16.462977 1040622 retry.go:31] will retry after 1.342785853s: waiting for machine to come up
	I0729 14:38:17.806889 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:17.807333 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:17.807365 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:17.807266 1040622 retry.go:31] will retry after 1.804812934s: waiting for machine to come up
	I0729 14:38:16.550848 1039263 node_ready.go:49] node "embed-certs-668123" has status "Ready":"True"
	I0729 14:38:16.550880 1039263 node_ready.go:38] duration metric: took 7.512808712s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:16.550895 1039263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:16.563220 1039263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570054 1039263 pod_ready.go:92] pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:16.570080 1039263 pod_ready.go:81] duration metric: took 6.832939ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570091 1039263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:19.207981 1039263 pod_ready.go:102] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:17.498961 1039440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.280415291s)
	I0729 14:38:17.498997 1039440 crio.go:469] duration metric: took 2.280548689s to extract the tarball
	I0729 14:38:17.499008 1039440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:17.537972 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:17.583582 1039440 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:38:17.583609 1039440 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:38:17.583617 1039440 kubeadm.go:934] updating node { 192.168.72.233 8444 v1.30.3 crio true true} ...
	I0729 14:38:17.583719 1039440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:17.583789 1039440 ssh_runner.go:195] Run: crio config
	I0729 14:38:17.637202 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:17.637230 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:17.637243 1039440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:17.637272 1039440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.233 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751306 NodeName:default-k8s-diff-port-751306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:38:17.637451 1039440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751306"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:17.637528 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:38:17.650173 1039440 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:17.650259 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:17.661790 1039440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 14:38:17.680720 1039440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:17.700420 1039440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 14:38:17.723134 1039440 ssh_runner.go:195] Run: grep 192.168.72.233	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:17.727510 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
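
Both host entries in this run (host.minikube.internal and control-plane.minikube.internal) are maintained with the same idempotent shell pattern: strip any existing line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. The same logic in a small Go sketch operating on the file contents:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry rebuilds an /etc/hosts-style file so that exactly one
// line maps the given hostname, mirroring the shell pipeline in the log
// (grep -v the old entry, echo the new one, copy the temp file back).
func upsertHostsEntry(hosts, ip, name string) string {
	var out []string
	suffix := "\t" + name
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, suffix) {
			continue // drop any stale mapping for this name
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, ip+suffix)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.72.233", "control-plane.minikube.internal"))
}
```
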
	I0729 14:38:17.741033 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:17.889833 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:17.910486 1039440 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306 for IP: 192.168.72.233
	I0729 14:38:17.910540 1039440 certs.go:194] generating shared ca certs ...
	I0729 14:38:17.910565 1039440 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:17.910763 1039440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:17.910821 1039440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:17.910833 1039440 certs.go:256] generating profile certs ...
	I0729 14:38:17.910941 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/client.key
	I0729 14:38:17.911003 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key.811a3f6d
	I0729 14:38:17.911105 1039440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key
	I0729 14:38:17.911271 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:17.911315 1039440 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:17.911329 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:17.911362 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:17.911393 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:17.911426 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:17.911478 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:17.912301 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:17.948102 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:17.984122 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:18.019932 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:18.062310 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 14:38:18.093176 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:38:18.124016 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:18.151933 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:38:18.179714 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:18.203414 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:18.233286 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:18.262871 1039440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:18.283064 1039440 ssh_runner.go:195] Run: openssl version
	I0729 14:38:18.289016 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:18.299409 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304053 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304115 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.309976 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:18.321472 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:18.331916 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336822 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336881 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.342762 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:18.353478 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:18.364299 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369024 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369076 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.376534 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
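
The three openssl/ln pairs above install each CA under the hash-named symlink (<subject-hash>.0) that OpenSSL's certificate lookup expects in /etc/ssl/certs. A sketch of that step; it writes the symlink into a caller-chosen directory since the real path needs root:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash creates the <subject-hash>.0 symlink that OpenSSL uses
// to look up a CA certificate, the same thing the "openssl x509 -hash"
// + "ln -fs" pair in the log does (here without sudo, so point certsDir
// somewhere writable when trying it out).
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already linked
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
	fmt.Println(link, err)
}
```
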
	I0729 14:38:18.387360 1039440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:18.392392 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:18.398520 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:18.404397 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:18.410922 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:18.417193 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:18.423808 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
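
Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether the existing control-plane certs can be reused. A small Go wrapper around the same check (note that a missing or unreadable file also reports as expiring, which errs on the side of regeneration):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// expiresWithin reports whether the certificate at path expires within
// the given window, using the same `openssl x509 -checkend` trick as
// the log: openssl exits non-zero when the cert will expire inside the
// window (or when it cannot be read at all).
func expiresWithin(path string, window time.Duration) bool {
	secs := fmt.Sprintf("%d", int(window.Seconds()))
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", secs).Run()
	return err != nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	}
	for _, c := range certs {
		fmt.Printf("%s expires within 24h: %v\n", c, expiresWithin(c, 24*time.Hour))
	}
}
```
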
	I0729 14:38:18.433345 1039440 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:18.433463 1039440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:18.433582 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.476749 1039440 cri.go:89] found id: ""
	I0729 14:38:18.476834 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:18.488548 1039440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:18.488570 1039440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:18.488628 1039440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:18.499081 1039440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:18.500064 1039440 kubeconfig.go:125] found "default-k8s-diff-port-751306" server: "https://192.168.72.233:8444"
	I0729 14:38:18.502130 1039440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:18.511589 1039440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.233
	I0729 14:38:18.511631 1039440 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:18.511646 1039440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:18.511698 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.559691 1039440 cri.go:89] found id: ""
	I0729 14:38:18.559779 1039440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:18.576217 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:18.586031 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:18.586057 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:18.586110 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:38:18.595032 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:18.595096 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:18.604320 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:38:18.613996 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:18.614053 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:18.623345 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.631898 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:18.631943 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.641303 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:38:18.649849 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:18.649907 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
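
The grep/rm sequence above keeps only kubeconfig files that already point at https://control-plane.minikube.internal:8444 and deletes the rest before kubeadm regenerates them; in this run all four files were absent, so every grep failed and the removals were no-ops. A compact Go sketch of that cleanup:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep + rm sequence in
// the log. Files that are missing are simply skipped.
func pruneStaleConfigs(endpoint string, paths []string) []string {
	var removed []string
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // already absent, as in this run
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(p); err == nil {
				removed = append(removed, p)
			}
		}
	}
	return removed
}

func main() {
	paths := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	fmt.Println("removed:", pruneStaleConfigs("https://control-plane.minikube.internal:8444", paths))
}
```
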
	I0729 14:38:18.659657 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:18.668914 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:18.782351 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:19.902413 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.120025721s)
	I0729 14:38:19.902451 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.120455 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.206099 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.293738 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:20.293850 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:20.794840 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.294958 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.313567 1039440 api_server.go:72] duration metric: took 1.019826572s to wait for apiserver process to appear ...
	I0729 14:38:21.313600 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:21.313625 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:21.314152 1039440 api_server.go:269] stopped: https://192.168.72.233:8444/healthz: Get "https://192.168.72.233:8444/healthz": dial tcp 192.168.72.233:8444: connect: connection refused
	I0729 14:38:21.813935 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
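
After the kubeadm init phases, the log polls https://192.168.72.233:8444/healthz until the apiserver answers: first connection refused, then 403 (anonymous access before RBAC bootstrap), then 500 while post-start hooks finish, and eventually 200. A Go sketch of that polling loop (TLS verification is skipped here only because the sketch has no cluster CA; minikube itself uses the cluster's certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it
// returns 200 or the deadline passes. The 403 and 500 bodies in the log
// are expected while RBAC bootstrap roles and post-start hooks are
// still completing.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.233:8444/healthz", 30*time.Second))
}
```
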
	I0729 14:38:19.613474 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:19.613801 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:19.613830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:19.613749 1040622 retry.go:31] will retry after 1.449593132s: waiting for machine to come up
	I0729 14:38:21.064774 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:21.065382 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:21.065405 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:21.065314 1040622 retry.go:31] will retry after 1.807508073s: waiting for machine to come up
	I0729 14:38:22.874485 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:22.874896 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:22.874925 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:22.874844 1040622 retry.go:31] will retry after 3.036719557s: waiting for machine to come up
	I0729 14:38:21.578125 1039263 pod_ready.go:92] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.578152 1039263 pod_ready.go:81] duration metric: took 5.008051755s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.578164 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584521 1039263 pod_ready.go:92] pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.584544 1039263 pod_ready.go:81] duration metric: took 6.372252ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584558 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590245 1039263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.590269 1039263 pod_ready.go:81] duration metric: took 5.702853ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590280 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594576 1039263 pod_ready.go:92] pod "kube-proxy-2v79q" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.594602 1039263 pod_ready.go:81] duration metric: took 4.314692ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594614 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787339 1039263 pod_ready.go:92] pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.787379 1039263 pod_ready.go:81] duration metric: took 192.756548ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787399 1039263 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:23.795588 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
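
The pod_ready lines above wait on each system pod's Ready condition, the same signal `kubectl get pod -o jsonpath` exposes. A sketch of that check via kubectl (the context and pod names are taken from the log; minikube itself uses the Kubernetes client library rather than shelling out):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl for the pod's Ready condition, the same check
// pod_ready.go logs as `has status "Ready":"True"`.
func podReady(kubecontext, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubecontext,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Poll until the pod reports Ready, like the waiting loop in the log.
	for {
		ok, err := podReady("embed-certs-668123", "kube-system", "metrics-server-569cc877fc-5msnp")
		fmt.Println("ready:", ok, "err:", err)
		if ok {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```
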
	I0729 14:38:24.561135 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:24.561176 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:24.561195 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.635519 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.635550 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:24.813755 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.817972 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.818003 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.314643 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.320059 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.320094 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.814758 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.820578 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.820613 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.314798 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.319346 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.319384 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.813907 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.821176 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.821208 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.314614 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.319335 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:27.319361 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.814188 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.819010 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:38:27.826057 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:27.826082 1039440 api_server.go:131] duration metric: took 6.512474877s to wait for apiserver health ...
	I0729 14:38:27.826091 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:27.826098 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:27.827698 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
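
The block above shows api_server.go polling https://192.168.72.233:8444/healthz roughly every 500ms until the failing post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller) report ok. A rough manual equivalent of that poll, as an illustrative sketch only (curl -k skips certificate verification, which the real client does not do):

    # poll the apiserver health endpoint until it returns HTTP 200
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.72.233:8444/healthz)" = "200" ]; do
      sleep 0.5
    done
    echo "apiserver healthy"
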
	I0729 14:38:25.913642 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:25.914139 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:25.914166 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:25.914099 1040622 retry.go:31] will retry after 3.839238383s: waiting for machine to come up
	I0729 14:38:26.293618 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:28.294115 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:30.296010 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.361688 1038758 start.go:364] duration metric: took 52.182622805s to acquireMachinesLock for "no-preload-603534"
	I0729 14:38:31.361756 1038758 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:31.361765 1038758 fix.go:54] fixHost starting: 
	I0729 14:38:31.362279 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:31.362319 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:31.380259 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0729 14:38:31.380783 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:31.381320 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:38:31.381349 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:31.381649 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:31.381848 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:31.381989 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:38:31.383537 1038758 fix.go:112] recreateIfNeeded on no-preload-603534: state=Stopped err=<nil>
	I0729 14:38:31.383561 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	W0729 14:38:31.383739 1038758 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:31.385496 1038758 out.go:177] * Restarting existing kvm2 VM for "no-preload-603534" ...
	I0729 14:38:31.386878 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Start
	I0729 14:38:31.387071 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring networks are active...
	I0729 14:38:31.387821 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network default is active
	I0729 14:38:31.388141 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network mk-no-preload-603534 is active
	I0729 14:38:31.388649 1038758 main.go:141] libmachine: (no-preload-603534) Getting domain xml...
	I0729 14:38:31.391807 1038758 main.go:141] libmachine: (no-preload-603534) Creating domain...
	I0729 14:38:27.829109 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:27.839810 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
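
The 496-byte file copied above is the bridge CNI conflist; the log records only its size, not its contents. A conflist of this kind typically defines a bridge plugin with host-local IPAM plus a portmap plugin. To see exactly what was written you could inspect it on the node, assuming the profile name default-k8s-diff-port-751306 seen in the pod names below:

    # print the bridge CNI config that was just copied to the node
    minikube -p default-k8s-diff-port-751306 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
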
	I0729 14:38:27.858724 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:27.868075 1039440 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:27.868112 1039440 system_pods.go:61] "coredns-7db6d8ff4d-m6dlw" [7ce45b48-f04d-4167-8a6e-643b2fb3c4f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:27.868121 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [7ccadfd7-8b68-45c0-9670-af97b90d35d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:27.868128 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [5e8c8e17-28db-499c-a940-e67d92b28bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:27.868134 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [a2d31d58-d8d9-4070-96af-0d1af763d0b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:27.868140 1039440 system_pods.go:61] "kube-proxy-p6dv5" [c44edf0a-f608-49f2-ab53-7ffbcdf13b5e] Running
	I0729 14:38:27.868146 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [b87ee044-f43f-4aa7-94b3-4f2ad2213ce9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:27.868152 1039440 system_pods.go:61] "metrics-server-569cc877fc-gmz64" [296e883c-7394-4004-a25f-e93b4be52d46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:27.868156 1039440 system_pods.go:61] "storage-provisioner" [ec3b78f1-96a3-47b2-958d-82258a074634] Running
	I0729 14:38:27.868165 1039440 system_pods.go:74] duration metric: took 9.405484ms to wait for pod list to return data ...
	I0729 14:38:27.868173 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:27.871538 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:27.871563 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:27.871575 1039440 node_conditions.go:105] duration metric: took 3.397306ms to run NodePressure ...
	I0729 14:38:27.871596 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:28.143890 1039440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148855 1039440 kubeadm.go:739] kubelet initialised
	I0729 14:38:28.148880 1039440 kubeadm.go:740] duration metric: took 4.952057ms waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148891 1039440 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:28.154636 1039440 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:30.161265 1039440 pod_ready.go:102] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.161979 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:31.162005 1039440 pod_ready.go:81] duration metric: took 3.007344998s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:31.162015 1039440 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
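
pod_ready.go above is polling the Ready condition on each system-critical pod; coredns has just turned Ready and the wait has moved on to etcd. Done by hand with kubectl it would look something like the following, assuming the kubeconfig context carries the profile name default-k8s-diff-port-751306:

    # wait (up to the same 4m budget) for the etcd pod's Ready condition
    kubectl --context default-k8s-diff-port-751306 -n kube-system \
      wait --for=condition=Ready pod/etcd-default-k8s-diff-port-751306 --timeout=4m
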
	I0729 14:38:29.755060 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755512 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has current primary IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755524 1039759 main.go:141] libmachine: (old-k8s-version-360866) Found IP for machine: 192.168.39.71
	I0729 14:38:29.755536 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserving static IP address...
	I0729 14:38:29.755975 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.756008 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserved static IP address: 192.168.39.71
	I0729 14:38:29.756035 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | skip adding static IP to network mk-old-k8s-version-360866 - found existing host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"}
	I0729 14:38:29.756048 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting for SSH to be available...
	I0729 14:38:29.756067 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Getting to WaitForSSH function...
	I0729 14:38:29.758527 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.758899 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.758944 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.759003 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH client type: external
	I0729 14:38:29.759024 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa (-rw-------)
	I0729 14:38:29.759058 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:29.759070 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | About to run SSH command:
	I0729 14:38:29.759083 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | exit 0
	I0729 14:38:29.884425 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:29.884833 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:38:29.885450 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:29.887929 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888241 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.888294 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888624 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:38:29.888895 1039759 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:29.888919 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:29.889221 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.891654 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892013 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.892038 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892163 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.892350 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892598 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892764 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.892968 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.893158 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.893169 1039759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:29.993529 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:29.993564 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.993859 1039759 buildroot.go:166] provisioning hostname "old-k8s-version-360866"
	I0729 14:38:29.993893 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.994074 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.996882 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997279 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.997308 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997537 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.997699 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997856 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997976 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.998206 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.998412 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.998429 1039759 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-360866 && echo "old-k8s-version-360866" | sudo tee /etc/hostname
	I0729 14:38:30.115298 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-360866
	
	I0729 14:38:30.115331 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.118349 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.118763 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.118793 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.119029 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.119203 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119356 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119561 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.119772 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.119976 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.120019 1039759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-360866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-360866/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-360866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:30.229987 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:30.230017 1039759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:30.230059 1039759 buildroot.go:174] setting up certificates
	I0729 14:38:30.230070 1039759 provision.go:84] configureAuth start
	I0729 14:38:30.230090 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:30.230436 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:30.233150 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233501 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.233533 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233719 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.236157 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236494 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.236534 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236713 1039759 provision.go:143] copyHostCerts
	I0729 14:38:30.236786 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:30.236797 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:30.236856 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:30.236976 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:30.236986 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:30.237006 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:30.237071 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:30.237078 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:30.237095 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:30.237153 1039759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-360866 san=[127.0.0.1 192.168.39.71 localhost minikube old-k8s-version-360866]
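
provision.go is regenerating the machine's server certificate with the SAN list shown above. When debugging this step, the SANs on the generated cert can be confirmed with openssl (path taken from the log line above):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
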
	I0729 14:38:30.680859 1039759 provision.go:177] copyRemoteCerts
	I0729 14:38:30.680933 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:30.680970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.683890 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684229 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.684262 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684430 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.684634 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.684822 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.684973 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:30.770659 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:30.799011 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 14:38:30.825536 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:30.850751 1039759 provision.go:87] duration metric: took 620.664228ms to configureAuth
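
copyRemoteCerts has just pushed the CA, server cert and server key into /etc/docker on the VM; a quick sanity check from inside the guest would be:

    # confirm the three certs copied above landed on the node
    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
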
	I0729 14:38:30.850795 1039759 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:30.850998 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:38:30.851072 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.853735 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854065 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.854102 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854197 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.854408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854559 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854717 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.854961 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.855169 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.855187 1039759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:31.119354 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:31.119386 1039759 machine.go:97] duration metric: took 1.230472142s to provisionDockerMachine
	I0729 14:38:31.119401 1039759 start.go:293] postStartSetup for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:38:31.119415 1039759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:31.119456 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.119885 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:31.119926 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.123196 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123576 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.123607 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123826 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.124053 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.124276 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.124469 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.208607 1039759 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:31.213173 1039759 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:31.213206 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:31.213268 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:31.213352 1039759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:31.213454 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:31.225256 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:31.253156 1039759 start.go:296] duration metric: took 133.735669ms for postStartSetup
	I0729 14:38:31.253208 1039759 fix.go:56] duration metric: took 19.124042428s for fixHost
	I0729 14:38:31.253237 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.256005 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256340 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.256375 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256535 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.256732 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.256927 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.257075 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.257272 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:31.257445 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:31.257455 1039759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:38:31.361488 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263911.340365932
	
	I0729 14:38:31.361512 1039759 fix.go:216] guest clock: 1722263911.340365932
	I0729 14:38:31.361519 1039759 fix.go:229] Guest: 2024-07-29 14:38:31.340365932 +0000 UTC Remote: 2024-07-29 14:38:31.253213714 +0000 UTC m=+217.413183116 (delta=87.152218ms)
	I0729 14:38:31.361572 1039759 fix.go:200] guest clock delta is within tolerance: 87.152218ms
	I0729 14:38:31.361583 1039759 start.go:83] releasing machines lock for "old-k8s-version-360866", held for 19.232453759s
	I0729 14:38:31.361611 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.361921 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:31.364981 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365412 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.365441 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365648 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366227 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366482 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366583 1039759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:31.366644 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.366761 1039759 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:31.366797 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.369658 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.369699 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370051 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370081 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370105 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370125 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370309 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370325 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370567 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370568 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370773 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370809 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370958 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.370957 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.472108 1039759 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:31.478939 1039759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:31.630720 1039759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:31.637768 1039759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:31.637874 1039759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:31.655476 1039759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:31.655504 1039759 start.go:495] detecting cgroup driver to use...
	I0729 14:38:31.655584 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:31.679387 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:31.704260 1039759 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:31.704318 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:31.727875 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:31.743197 1039759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:31.867502 1039759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:32.035088 1039759 docker.go:233] disabling docker service ...
	I0729 14:38:32.035169 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:32.050118 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:32.064828 1039759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:32.202938 1039759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:32.333330 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
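For context, the stop/disable/mask sequence above is what keeps cri-dockerd and Docker from competing with CRI-O for the CRI socket. A minimal sketch of the same steps run by hand (unit names taken from the log; consolidating them into fewer commands is illustrative):
	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active docker || echo "docker is stopped"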
	I0729 14:38:32.348845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:32.369848 1039759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:38:32.369923 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.381787 1039759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:32.381893 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.394331 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.405323 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
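The three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the cgroupfs cgroup manager, and the pod conmon cgroup. A quick way to confirm the result (expected values follow from the commands in the log):
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"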
	I0729 14:38:32.417259 1039759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:32.428997 1039759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:32.440934 1039759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:32.441003 1039759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:32.454949 1039759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
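The sysctl failure above is expected when br_netfilter is not loaded; loading the module creates the /proc entry and the bridge netfilter hooks kube-proxy relies on. A minimal sketch of the same recovery sequence:
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sysctl net.bridge.bridge-nf-call-iptables    # present once the module is loaded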
	I0729 14:38:32.466042 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:32.596308 1039759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:32.762548 1039759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:32.762632 1039759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:32.768336 1039759 start.go:563] Will wait 60s for crictl version
	I0729 14:38:32.768447 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:32.772850 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:32.829827 1039759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:32.829936 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.863269 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.897768 1039759 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:38:32.899209 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:32.902257 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902649 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:32.902680 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902928 1039759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:32.908590 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:32.921952 1039759 kubeadm.go:883] updating cluster {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:32.922094 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:38:32.922141 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:32.969932 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:32.970003 1039759 ssh_runner.go:195] Run: which lz4
	I0729 14:38:32.974564 1039759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:38:32.980128 1039759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:32.980173 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:38:32.795590 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.295541 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.750580 1038758 main.go:141] libmachine: (no-preload-603534) Waiting to get IP...
	I0729 14:38:31.751732 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.752236 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.752340 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.752236 1040763 retry.go:31] will retry after 239.008836ms: waiting for machine to come up
	I0729 14:38:31.993011 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.993538 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.993569 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.993481 1040763 retry.go:31] will retry after 288.863538ms: waiting for machine to come up
	I0729 14:38:32.284306 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.284941 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.284980 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.284867 1040763 retry.go:31] will retry after 410.903425ms: waiting for machine to come up
	I0729 14:38:32.697686 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.698291 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.698322 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.698227 1040763 retry.go:31] will retry after 423.090324ms: waiting for machine to come up
	I0729 14:38:33.122914 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.123550 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.123579 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.123500 1040763 retry.go:31] will retry after 744.030348ms: waiting for machine to come up
	I0729 14:38:33.869849 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.870499 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.870523 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.870456 1040763 retry.go:31] will retry after 888.516658ms: waiting for machine to come up
	I0729 14:38:34.760145 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:34.760594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:34.760627 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:34.760534 1040763 retry.go:31] will retry after 889.371631ms: waiting for machine to come up
	I0729 14:38:35.651169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:35.651700 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:35.651731 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:35.651636 1040763 retry.go:31] will retry after 1.200333492s: waiting for machine to come up
	I0729 14:38:33.181695 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.672201 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:34.707140 1039759 crio.go:462] duration metric: took 1.732619622s to copy over tarball
	I0729 14:38:34.707232 1039759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:37.740076 1039759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.032804006s)
	I0729 14:38:37.740105 1039759 crio.go:469] duration metric: took 3.032930405s to extract the tarball
	I0729 14:38:37.740113 1039759 ssh_runner.go:146] rm: /preloaded.tar.lz4
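The preload flow is: check for /preloaded.tar.lz4 on the node, scp the cached tarball over, extract it into /var, then delete it. A minimal sketch of the extract-and-verify steps as run on the guest (flags and paths taken from the log):
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json    # the preloaded images should now be listed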
	I0729 14:38:37.786934 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:37.827451 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:37.827484 1039759 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:37.827576 1039759 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:37.827606 1039759 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.827624 1039759 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.827702 1039759 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.827607 1039759 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.827683 1039759 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829621 1039759 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.829709 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.829724 1039759 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.829628 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829808 1039759 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:38:37.829625 1039759 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.113249 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.373433 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.378382 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.380909 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.382431 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.391678 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.392565 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.419739 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:38:38.491174 1039759 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:38:38.491255 1039759 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.491320 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570681 1039759 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:38:38.570784 1039759 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:38:38.570832 1039759 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.570889 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570792 1039759 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.570721 1039759 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:38:38.570966 1039759 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.570977 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570992 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.576687 1039759 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:38:38.576728 1039759 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.576769 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587650 1039759 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:38:38.587699 1039759 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.587701 1039759 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:38:38.587738 1039759 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:38:38.587753 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587791 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587866 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.587883 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.587913 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.587948 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.591209 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.599567 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.610869 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:38:38.742939 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:38:38.742974 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:38:38.743091 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:38:38.743098 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:38:38.745789 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:38:38.745857 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:38:38.753643 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:38:38.753704 1039759 cache_images.go:92] duration metric: took 926.203812ms to LoadCachedImages
	W0729 14:38:38.753790 1039759 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 14:38:38.753804 1039759 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.20.0 crio true true} ...
	I0729 14:38:38.753931 1039759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-360866 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:38.753992 1039759 ssh_runner.go:195] Run: crio config
	I0729 14:38:38.802220 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:38:38.802246 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:38.802258 1039759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:38.802285 1039759 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-360866 NodeName:old-k8s-version-360866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:38:38.802487 1039759 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-360866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:38.802591 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:38:38.816832 1039759 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:38.816934 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:38.827468 1039759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 14:38:38.847125 1039759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:38.865302 1039759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
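At this point the kubelet drop-in and the regenerated kubeadm config are only staged; kubeadm.yaml.new replaces kubeadm.yaml later, after the diff check further down in the log. A minimal sketch for inspecting what was staged (paths from the log):
	sudo systemctl cat kubelet    # shows the 10-kubeadm.conf drop-in written above
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true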
	I0729 14:38:37.795799 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:40.294979 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:36.853388 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:36.853944 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:36.853979 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:36.853881 1040763 retry.go:31] will retry after 1.750535475s: waiting for machine to come up
	I0729 14:38:38.605644 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:38.606135 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:38.606185 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:38.606079 1040763 retry.go:31] will retry after 2.245294623s: waiting for machine to come up
	I0729 14:38:40.853761 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:40.854277 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:40.854311 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:40.854214 1040763 retry.go:31] will retry after 1.864975071s: waiting for machine to come up
	I0729 14:38:38.299326 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:39.170692 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.170720 1039440 pod_ready.go:81] duration metric: took 8.008696752s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.170735 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177419 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.177449 1039440 pod_ready.go:81] duration metric: took 6.705958ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177463 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185538 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.185566 1039440 pod_ready.go:81] duration metric: took 2.008093791s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185580 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193833 1039440 pod_ready.go:92] pod "kube-proxy-p6dv5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.193864 1039440 pod_ready.go:81] duration metric: took 8.275486ms for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193878 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200931 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.200963 1039440 pod_ready.go:81] duration metric: took 7.075212ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200978 1039440 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:38.884267 1039759 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:38.889206 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:38.905643 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:39.032065 1039759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:39.051892 1039759 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866 for IP: 192.168.39.71
	I0729 14:38:39.051991 1039759 certs.go:194] generating shared ca certs ...
	I0729 14:38:39.052019 1039759 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.052203 1039759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:39.052258 1039759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:39.052270 1039759 certs.go:256] generating profile certs ...
	I0729 14:38:39.091359 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key
	I0729 14:38:39.091485 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0
	I0729 14:38:39.091554 1039759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key
	I0729 14:38:39.091718 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:39.091763 1039759 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:39.091776 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:39.091804 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:39.091835 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:39.091867 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:39.091924 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:39.092850 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:39.125528 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:39.153093 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:39.181324 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:39.235516 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 14:38:39.262599 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:38:39.293085 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:39.326318 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:38:39.361548 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:39.386876 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:39.412529 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:39.438418 1039759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:39.459519 1039759 ssh_runner.go:195] Run: openssl version
	I0729 14:38:39.466109 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:39.477941 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482748 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482820 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.489099 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:39.500207 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:39.511513 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516125 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516183 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.522297 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:39.533536 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:39.544996 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549681 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549733 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.556332 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
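The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates, which is how OpenSSL looks up CA certs in /etc/ssl/certs. A minimal sketch of deriving one of them by hand (file names from the log):
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"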
	I0729 14:38:39.571393 1039759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:39.578420 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:39.586316 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:39.593450 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:39.600604 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:39.607483 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:39.614692 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:39.621776 1039759 kubeadm.go:392] StartCluster: {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:39.621893 1039759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:39.621955 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.673544 1039759 cri.go:89] found id: ""
	I0729 14:38:39.673634 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:39.687887 1039759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:39.687912 1039759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:39.687963 1039759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:39.701616 1039759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:39.702914 1039759 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:39.703576 1039759 kubeconfig.go:62] /home/jenkins/minikube-integration/19338-974764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-360866" cluster setting kubeconfig missing "old-k8s-version-360866" context setting]
	I0729 14:38:39.704951 1039759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.715056 1039759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:39.728384 1039759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.71
	I0729 14:38:39.728448 1039759 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:39.728466 1039759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:39.728534 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.778476 1039759 cri.go:89] found id: ""
	I0729 14:38:39.778561 1039759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:39.800712 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:39.813243 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:39.813265 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:39.813323 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:38:39.824822 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:39.824897 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:39.834966 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:38:39.847660 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:39.847887 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:39.861117 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.873868 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:39.873936 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.884195 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:38:39.895155 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:39.895234 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:39.909138 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:39.920721 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:40.055932 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.173909 1039759 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.117933178s)
	I0729 14:38:41.173947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.419684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.550852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
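Because existing configuration files were found, the restart path regenerates the control plane piecewise with kubeadm init phases rather than running a full kubeadm init. A minimal sketch of the same sequence (binary and config paths from the log; the loop form is illustrative):
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done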
	I0729 14:38:41.655941 1039759 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:41.656040 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.156080 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.656948 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.656087 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.794217 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.293634 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:42.720182 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:42.720674 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:42.720701 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:42.720614 1040763 retry.go:31] will retry after 2.929394717s: waiting for machine to come up
	I0729 14:38:45.653508 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:45.654044 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:45.654069 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:45.653993 1040763 retry.go:31] will retry after 4.133064498s: waiting for machine to come up
	I0729 14:38:43.208287 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.706607 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:44.156583 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:44.657199 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.156268 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.656786 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.156393 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.656151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.156507 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.656922 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.156840 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.656756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
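The repeated pgrep runs above are a poll loop waiting for the kube-apiserver process to appear once kubelet brings up the static pods. A minimal equivalent sketch (the 500ms interval is an assumption inferred from the log timestamps):
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done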
	I0729 14:38:47.294322 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.795189 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.789721 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790248 1038758 main.go:141] libmachine: (no-preload-603534) Found IP for machine: 192.168.61.116
	I0729 14:38:49.790272 1038758 main.go:141] libmachine: (no-preload-603534) Reserving static IP address...
	I0729 14:38:49.790290 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has current primary IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790823 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.790860 1038758 main.go:141] libmachine: (no-preload-603534) Reserved static IP address: 192.168.61.116
	I0729 14:38:49.790883 1038758 main.go:141] libmachine: (no-preload-603534) DBG | skip adding static IP to network mk-no-preload-603534 - found existing host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"}
	I0729 14:38:49.790920 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Getting to WaitForSSH function...
	I0729 14:38:49.790937 1038758 main.go:141] libmachine: (no-preload-603534) Waiting for SSH to be available...
	I0729 14:38:49.793243 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793646 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.793679 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793820 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH client type: external
	I0729 14:38:49.793850 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa (-rw-------)
	I0729 14:38:49.793884 1038758 main.go:141] libmachine: (no-preload-603534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:49.793899 1038758 main.go:141] libmachine: (no-preload-603534) DBG | About to run SSH command:
	I0729 14:38:49.793961 1038758 main.go:141] libmachine: (no-preload-603534) DBG | exit 0
	I0729 14:38:49.924827 1038758 main.go:141] libmachine: (no-preload-603534) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:49.925188 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetConfigRaw
	I0729 14:38:49.925835 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:49.928349 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.928799 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.928830 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.929091 1038758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/config.json ...
	I0729 14:38:49.929313 1038758 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:49.929334 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:49.929556 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:49.932040 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932431 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.932473 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932629 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:49.932798 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.932930 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.933033 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:49.933142 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:49.933313 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:49.933324 1038758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:50.049016 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:50.049059 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049328 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:38:50.049354 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049566 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.052138 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052532 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.052561 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052736 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.052918 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053093 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053269 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.053462 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.053641 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.053653 1038758 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-603534 && echo "no-preload-603534" | sudo tee /etc/hostname
	I0729 14:38:50.189302 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-603534
	
	I0729 14:38:50.189341 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.192559 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.192949 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.192974 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.193248 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.193476 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193689 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193870 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.194082 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.194305 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.194329 1038758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603534/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:50.322506 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:50.322540 1038758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:50.322564 1038758 buildroot.go:174] setting up certificates
	I0729 14:38:50.322577 1038758 provision.go:84] configureAuth start
	I0729 14:38:50.322589 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.322938 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:50.325594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.325957 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.325994 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.326139 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.328455 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328803 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.328828 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328950 1038758 provision.go:143] copyHostCerts
	I0729 14:38:50.329015 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:50.329025 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:50.329078 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:50.329165 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:50.329173 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:50.329192 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:50.329243 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:50.329249 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:50.329264 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:50.329310 1038758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.no-preload-603534 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-603534]
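The server certificate generated above is signed by the local minikube CA and carries the SAN list shown in the log (two IP addresses plus three DNS names). As a rough illustration only, not minikube's actual implementation, a certificate of the same shape can be produced with Go's crypto/x509; the CA/key variables, serial number, validity period, and organization string below are assumptions:

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate signed by caCert/caKey whose
// SANs match the ones logged above. Serial and validity are illustrative.
func signServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-603534"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.116")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-603534"},
	}
	// Returns DER bytes; they would be PEM-encoded before being written out as server.pem.
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
}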
	I0729 14:38:50.447706 1038758 provision.go:177] copyRemoteCerts
	I0729 14:38:50.447777 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:50.447810 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.450714 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451106 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.451125 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451444 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.451679 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.451855 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.451975 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.539025 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:50.567887 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:50.594581 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 14:38:50.619475 1038758 provision.go:87] duration metric: took 296.880769ms to configureAuth
	I0729 14:38:50.619509 1038758 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:50.619708 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:38:50.619797 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.622753 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623121 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.623151 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623331 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.623519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623684 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623813 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.623971 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.624151 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.624168 1038758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:50.895618 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:50.895649 1038758 machine.go:97] duration metric: took 966.320375ms to provisionDockerMachine
	I0729 14:38:50.895662 1038758 start.go:293] postStartSetup for "no-preload-603534" (driver="kvm2")
	I0729 14:38:50.895684 1038758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:50.895717 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:50.896084 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:50.896112 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.899586 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.899998 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.900031 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.900168 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.900424 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.900622 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.900799 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.987195 1038758 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:50.991924 1038758 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:50.991952 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:50.992025 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:50.992111 1038758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:50.992208 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:51.002048 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:51.029714 1038758 start.go:296] duration metric: took 134.037621ms for postStartSetup
	I0729 14:38:51.029758 1038758 fix.go:56] duration metric: took 19.66799406s for fixHost
	I0729 14:38:51.029782 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.032495 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.032819 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.032844 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.033049 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.033236 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033377 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033587 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.033767 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:51.034007 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:51.034021 1038758 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:51.149481 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263931.130931233
	
	I0729 14:38:51.149510 1038758 fix.go:216] guest clock: 1722263931.130931233
	I0729 14:38:51.149520 1038758 fix.go:229] Guest: 2024-07-29 14:38:51.130931233 +0000 UTC Remote: 2024-07-29 14:38:51.029761931 +0000 UTC m=+354.409484230 (delta=101.169302ms)
	I0729 14:38:51.149575 1038758 fix.go:200] guest clock delta is within tolerance: 101.169302ms
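For context on the three fix.go lines above: the provisioner reads the guest clock over SSH, compares it with the host-side timestamp, and only resynchronizes the VM clock when the difference exceeds a tolerance. A minimal runnable Go sketch of that comparison, using the exact values from this log (the one-second tolerance is an assumption for illustration, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute guest/host clock difference
// and whether it is small enough that no resync is needed.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(0, 1722263931130931233).UTC()               // "guest clock: 1722263931.130931233"
	host := time.Date(2024, 7, 29, 14, 38, 51, 29761931, time.UTC) // "Remote: 2024-07-29 14:38:51.029761931 +0000 UTC"
	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok) // delta=101.169302ms withinTolerance=true
}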
	I0729 14:38:51.149583 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 19.787859214s
	I0729 14:38:51.149617 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.149923 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:51.152671 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153054 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.153081 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153298 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.153898 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154092 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154192 1038758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:51.154245 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.154349 1038758 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:51.154378 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.157173 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157200 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157560 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157592 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157635 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157654 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157955 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.157976 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.158169 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158195 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158370 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158381 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158565 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.158572 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.260806 1038758 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:51.266847 1038758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:51.412637 1038758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:51.418879 1038758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:51.418954 1038758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:51.435946 1038758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
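The two steps above disable competing CNI configurations: any file in /etc/cni/net.d whose name matches *bridge* or *podman* is renamed with a .mk_disabled suffix, so only the CNI that minikube manages stays active. A hedged Go equivalent of that find-and-rename (paths and suffix mirror the log; error handling is simplified and this is not the actual minikube code):

package provision

import (
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", mirroring the find/mv command in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}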
	I0729 14:38:51.435978 1038758 start.go:495] detecting cgroup driver to use...
	I0729 14:38:51.436061 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:51.457517 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:51.472718 1038758 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:51.472811 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:51.487062 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:51.501410 1038758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:51.617292 1038758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:47.708063 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.708506 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.209337 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:51.764302 1038758 docker.go:233] disabling docker service ...
	I0729 14:38:51.764386 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:51.779137 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:51.794372 1038758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:51.930402 1038758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:52.062691 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:52.076796 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:52.095912 1038758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 14:38:52.095994 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.107507 1038758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:52.107588 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.119470 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.131252 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.141672 1038758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:52.152086 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.163682 1038758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.189614 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.200279 1038758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:52.211878 1038758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:52.211943 1038758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:52.224909 1038758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:52.234312 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:52.357370 1038758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:52.492520 1038758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:52.492622 1038758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:52.497537 1038758 start.go:563] Will wait 60s for crictl version
	I0729 14:38:52.497595 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.501292 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:52.544320 1038758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:52.544428 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.575452 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.605920 1038758 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 14:38:49.156539 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:49.656397 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.656968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.156321 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.656183 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.157099 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.656725 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.157009 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.656787 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.796331 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:53.799083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.607410 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:52.610017 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610296 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:52.610330 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610553 1038758 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:52.614659 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:52.626967 1038758 kubeadm.go:883] updating cluster {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:52.627087 1038758 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:38:52.627124 1038758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:52.662824 1038758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 14:38:52.662852 1038758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:52.662901 1038758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.662968 1038758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.663040 1038758 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 14:38:52.663043 1038758 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.663066 1038758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.663017 1038758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.664360 1038758 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 14:38:52.664947 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.664965 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.664954 1038758 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.665015 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.665023 1038758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.665351 1038758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.665423 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.829143 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.829158 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.829541 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.851797 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.866728 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 14:38:52.884604 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.893636 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.946087 1038758 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 14:38:52.946134 1038758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 14:38:52.946160 1038758 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.946170 1038758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.946173 1038758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 14:38:52.946192 1038758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.946216 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946221 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946217 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.954361 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.001715 1038758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 14:38:53.001766 1038758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.001826 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106651 1038758 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 14:38:53.106713 1038758 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.106770 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106838 1038758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 14:38:53.106883 1038758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.106921 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106927 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:53.106962 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:53.107012 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:53.107038 1038758 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 14:38:53.107067 1038758 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.107079 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.107092 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.131562 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.212076 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.212199 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.212272 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.214338 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.214430 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.216771 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.216941 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 14:38:53.217037 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:53.220214 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.220306 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.272021 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 14:38:53.272140 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:53.275939 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 14:38:53.275988 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276008 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.276009 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276029 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:38:53.276054 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.301528 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 14:38:53.301578 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 14:38:53.301600 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 14:38:53.301647 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 14:38:53.301759 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:38:55.357295 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.08120738s)
	I0729 14:38:55.357329 1038758 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.081270007s)
	I0729 14:38:55.357371 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 14:38:55.357338 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 14:38:55.357384 1038758 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.055605102s)
	I0729 14:38:55.357406 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 14:38:55.357407 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:55.357464 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:54.708330 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.207468 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:54.156921 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:54.656957 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.156201 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.656783 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.156180 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.656984 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.156610 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.656127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.156785 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.656192 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.295143 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:58.795511 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.217512 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.860011805s)
	I0729 14:38:57.217539 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 14:38:57.217570 1038758 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:57.217634 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:59.187398 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969733063s)
	I0729 14:38:59.187443 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 14:38:59.187482 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:59.187562 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:39:01.138568 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.950970137s)
	I0729 14:39:01.138617 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 14:39:01.138654 1038758 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:39:01.138740 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:59.207657 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:01.208795 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:59.156740 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:59.656223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.156726 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.656593 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.156115 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.656364 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.157069 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.656491 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.156938 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.656898 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.293858 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:03.484613 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.793953 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.231830 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.093043665s)
	I0729 14:39:04.231866 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 14:39:04.231897 1038758 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:04.231963 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:05.182458 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 14:39:05.182512 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:05.182566 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:03.209198 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.707557 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.157177 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:04.656505 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.156530 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.656389 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.156606 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.657121 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.157048 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.656497 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.156327 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.656868 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.794522 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.794886 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:07.253615 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.070972791s)
	I0729 14:39:07.253665 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 14:39:07.253700 1038758 cache_images.go:123] Successfully loaded all cached images
	I0729 14:39:07.253707 1038758 cache_images.go:92] duration metric: took 14.590842072s to LoadCachedImages
	I0729 14:39:07.253720 1038758 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0-beta.0 crio true true} ...
	I0729 14:39:07.253899 1038758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-603534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:39:07.253980 1038758 ssh_runner.go:195] Run: crio config
	I0729 14:39:07.309694 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:07.309720 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:07.309731 1038758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:39:07.309754 1038758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603534 NodeName:no-preload-603534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:39:07.309916 1038758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603534"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:39:07.309985 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 14:39:07.321876 1038758 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:39:07.321967 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:39:07.333057 1038758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 14:39:07.350193 1038758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 14:39:07.367171 1038758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
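The kubeadm, kubelet, and kube-proxy configuration rendered above is what gets copied to the node as /var/tmp/minikube/kubeadm.yaml.new at this point. For anyone inspecting one of these machines by hand, a minimal sketch of looking at the same files (paths taken from this log; the minikube ssh invocation itself is illustrative, since the tests drive everything through ssh_runner):

	minikube -p no-preload-603534 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	minikube -p no-preload-603534 ssh -- sudo cat /var/lib/kubelet/config.yaml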
	I0729 14:39:07.384123 1038758 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0729 14:39:07.387896 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:39:07.400317 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:39:07.525822 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:39:07.545142 1038758 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534 for IP: 192.168.61.116
	I0729 14:39:07.545167 1038758 certs.go:194] generating shared ca certs ...
	I0729 14:39:07.545189 1038758 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:39:07.545389 1038758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:39:07.545448 1038758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:39:07.545463 1038758 certs.go:256] generating profile certs ...
	I0729 14:39:07.545582 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/client.key
	I0729 14:39:07.545658 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key.117a155a
	I0729 14:39:07.545725 1038758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key
	I0729 14:39:07.545881 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:39:07.545913 1038758 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:39:07.545922 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:39:07.545945 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:39:07.545969 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:39:07.545990 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:39:07.546027 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:39:07.546679 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:39:07.582488 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:39:07.617327 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:39:07.647627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:39:07.685799 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:39:07.720365 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:39:07.744627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:39:07.771409 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:39:07.797570 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:39:07.820888 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:39:07.843714 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:39:07.867365 1038758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:39:07.884283 1038758 ssh_runner.go:195] Run: openssl version
	I0729 14:39:07.890379 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:39:07.901894 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906431 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906487 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.912284 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:39:07.923393 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:39:07.934119 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938563 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938620 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.944115 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:39:07.954815 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:39:07.965864 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970695 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970761 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.977340 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:39:07.990416 1038758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:39:07.995446 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:39:08.001615 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:39:08.007621 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:39:08.013648 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:39:08.019525 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:39:08.025505 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:39:08.031480 1038758 kubeadm.go:392] StartCluster: {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:39:08.031592 1038758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:39:08.031657 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.077847 1038758 cri.go:89] found id: ""
	I0729 14:39:08.077936 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:39:08.088616 1038758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:39:08.088639 1038758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:39:08.088704 1038758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:39:08.101150 1038758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:39:08.102305 1038758 kubeconfig.go:125] found "no-preload-603534" server: "https://192.168.61.116:8443"
	I0729 14:39:08.105529 1038758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:39:08.117031 1038758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0729 14:39:08.117070 1038758 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:39:08.117085 1038758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:39:08.117148 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.171626 1038758 cri.go:89] found id: ""
	I0729 14:39:08.171706 1038758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:39:08.190491 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:39:08.200776 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:39:08.200806 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:39:08.200873 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:39:08.211430 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:39:08.211483 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:39:08.221865 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:39:08.231668 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:39:08.231719 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:39:08.242027 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.251585 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:39:08.251639 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.261521 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:39:08.271210 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:39:08.271284 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:39:08.281112 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:39:08.290948 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:08.417397 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.400064 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.590859 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.670134 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.781580 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:39:09.781719 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.282592 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.781923 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.843114 1038758 api_server.go:72] duration metric: took 1.061535691s to wait for apiserver process to appear ...
	I0729 14:39:10.843151 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:39:10.843182 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:10.843715 1038758 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0729 14:39:11.343301 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:08.207563 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:10.208276 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.156858 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:09.656910 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.156126 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.657149 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.156223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.657184 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.156454 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.656896 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.656971 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.993249 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:13.993278 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:13.993298 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.011972 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:14.012012 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:14.343228 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.347946 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.347983 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:14.844144 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.858278 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.858311 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:15.343885 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:15.350223 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:39:15.360468 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:39:15.360513 1038758 api_server.go:131] duration metric: took 4.517353977s to wait for apiserver health ...
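The probes above poll https://192.168.61.116:8443/healthz until it returns 200: the early 403s are returned to the anonymous probe while RBAC bootstrap is still running, the 500s list exactly which post-start hooks are still pending, and the wait ends once the endpoint answers "ok". A rough manual equivalent from the host (illustrative only; -k skips certificate verification, whereas the test harness uses its own client configuration):

	curl -k https://192.168.61.116:8443/healthz
	curl -k 'https://192.168.61.116:8443/healthz?verbose'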
	I0729 14:39:15.360524 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:15.360532 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:15.362679 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:39:12.293516 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.294107 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.364237 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:39:15.379974 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:39:15.422444 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:39:15.441468 1038758 system_pods.go:59] 8 kube-system pods found
	I0729 14:39:15.441512 1038758 system_pods.go:61] "coredns-5cfdc65f69-tjdx4" [986cdef3-de61-4c0f-bc75-fae4f6b44a37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:39:15.441525 1038758 system_pods.go:61] "etcd-no-preload-603534" [e27f5761-5322-4d88-b90a-bcff42c9dfa5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:39:15.441537 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [33ed9f7c-1240-40cf-b51d-125b3473bfd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:39:15.441547 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [f79520a2-380e-4d8a-b1ff-78c6cd3d3741] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:39:15.441559 1038758 system_pods.go:61] "kube-proxy-ftpk5" [a5471ad7-5fd3-49b7-8631-4ca2962761d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:39:15.441568 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [860e262c-f036-4181-a0ad-8ba0058a47d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:39:15.441580 1038758 system_pods.go:61] "metrics-server-78fcd8795b-59sbc" [8af92987-ce8d-434f-93de-16d0adc35fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:39:15.441598 1038758 system_pods.go:61] "storage-provisioner" [579d0cc8-e30e-4ee3-ac55-c2f0bc5871e1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:39:15.441606 1038758 system_pods.go:74] duration metric: took 19.133029ms to wait for pod list to return data ...
	I0729 14:39:15.441618 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:39:15.445594 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:39:15.445630 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:39:15.445646 1038758 node_conditions.go:105] duration metric: took 4.019018ms to run NodePressure ...
	I0729 14:39:15.445678 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:15.743404 1038758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751028 1038758 kubeadm.go:739] kubelet initialised
	I0729 14:39:15.751050 1038758 kubeadm.go:740] duration metric: took 7.619795ms waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751059 1038758 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:39:15.759157 1038758 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:12.708704 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.208434 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:14.656806 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.156564 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.656881 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.156239 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.656440 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.157130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.656240 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.156161 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.656808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.294741 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:18.797700 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.768132 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.265670 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.709929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.206710 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.207809 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:19.156721 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:19.656766 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.156352 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.656788 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.156179 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.656213 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.156475 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.656275 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.156592 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.656979 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.294265 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:23.294366 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:25.794648 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.265947 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.266644 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.708214 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:27.208824 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.156798 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:24.656473 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.156551 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.656356 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.156887 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.656332 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.156494 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.656839 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.156763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.656512 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.795415 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:30.293460 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:26.766260 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.265817 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.265851 1038758 pod_ready.go:81] duration metric: took 13.506661461s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.265865 1038758 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276021 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.276043 1038758 pod_ready.go:81] duration metric: took 10.172055ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276052 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280197 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.280215 1038758 pod_ready.go:81] duration metric: took 4.156785ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280223 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284076 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.284096 1038758 pod_ready.go:81] duration metric: took 3.865927ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284122 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288280 1038758 pod_ready.go:92] pod "kube-proxy-ftpk5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.288297 1038758 pod_ready.go:81] duration metric: took 4.16843ms for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288305 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666771 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.666802 1038758 pod_ready.go:81] duration metric: took 378.49001ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666813 1038758 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
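The pod_ready polling above gives each system-critical pod up to 4m0s to report the Ready condition; the metrics-server pod is the only one still reporting Ready=False in the lines that follow. A comparable one-off check with kubectl (illustrative; the addon's k8s-app=metrics-server label is assumed) would be:

	kubectl --context no-preload-603534 -n kube-system wait pod -l k8s-app=metrics-server --for=condition=ready --timeout=4m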
	I0729 14:39:29.706596 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:32.208095 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.156096 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:29.656289 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.156756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.656888 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.156563 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.156271 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.656562 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.293988 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.793456 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:31.674203 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.174002 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.708005 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:37.206789 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.157046 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:34.656398 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.156198 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.656763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.156542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.656994 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.156808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.657093 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.156119 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.657017 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.793771 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.294267 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:36.676693 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.172713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.174348 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.207584 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.707645 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:39.656176 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.156455 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.656609 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.156891 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.656327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:41.656423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:41.701839 1039759 cri.go:89] found id: ""
	I0729 14:39:41.701863 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.701872 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:41.701878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:41.701942 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:41.738281 1039759 cri.go:89] found id: ""
	I0729 14:39:41.738308 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.738315 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:41.738321 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:41.738377 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:41.771954 1039759 cri.go:89] found id: ""
	I0729 14:39:41.771981 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.771990 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:41.771996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:41.772060 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:41.806157 1039759 cri.go:89] found id: ""
	I0729 14:39:41.806182 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.806190 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:41.806196 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:41.806249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:41.841284 1039759 cri.go:89] found id: ""
	I0729 14:39:41.841312 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.841319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:41.841325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:41.841394 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:41.875864 1039759 cri.go:89] found id: ""
	I0729 14:39:41.875893 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.875902 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:41.875908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:41.875962 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:41.909797 1039759 cri.go:89] found id: ""
	I0729 14:39:41.909824 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.909833 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:41.909840 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:41.909892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:41.943886 1039759 cri.go:89] found id: ""
	I0729 14:39:41.943912 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.943920 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:41.943929 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:41.943944 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:41.983224 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:41.983254 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:42.035264 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:42.035303 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:42.049343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:42.049369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:42.171904 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:42.171924 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:42.171947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
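The cycle above is minikube's log-collection fallback: it looks for a running kube-apiserver process, asks the CRI runtime for each control-plane container by name, and, finding none, gathers kubelet, dmesg, describe-nodes, and CRI-O output instead. A minimal Go sketch of the container-lookup part, assuming only the crictl invocation shown in the log (the loop structure and output here are illustrative, not minikube's actual logs.go/cri.go code):

// Sketch only: re-run the same per-component crictl query seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same command the log shows minikube running over SSH on the node.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%q containers: %v\n", name, ids)
		}
	}
}

In this run every query returns an empty ID list, which is why each pass falls through to the journal-based log gathering.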
	I0729 14:39:41.295209 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.795811 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.673853 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:45.674302 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.207555 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:46.707384 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.738629 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:44.753497 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:44.753582 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:44.793256 1039759 cri.go:89] found id: ""
	I0729 14:39:44.793283 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.793291 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:44.793298 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:44.793363 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:44.833698 1039759 cri.go:89] found id: ""
	I0729 14:39:44.833726 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.833733 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:44.833739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:44.833792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:44.876328 1039759 cri.go:89] found id: ""
	I0729 14:39:44.876357 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.876366 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:44.876372 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:44.876471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:44.918091 1039759 cri.go:89] found id: ""
	I0729 14:39:44.918121 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.918132 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:44.918140 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:44.918210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:44.965105 1039759 cri.go:89] found id: ""
	I0729 14:39:44.965137 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.965149 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:44.965157 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:44.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:45.014119 1039759 cri.go:89] found id: ""
	I0729 14:39:45.014150 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.014162 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:45.014170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:45.014243 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:45.059826 1039759 cri.go:89] found id: ""
	I0729 14:39:45.059858 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.059870 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:45.059879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:45.059946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:45.099666 1039759 cri.go:89] found id: ""
	I0729 14:39:45.099706 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.099717 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:45.099730 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:45.099748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:45.144219 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:45.144263 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:45.199719 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:45.199754 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:45.214225 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:45.214260 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:45.289090 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:45.289119 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:45.289138 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:47.860797 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:47.874523 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:47.874606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:47.913570 1039759 cri.go:89] found id: ""
	I0729 14:39:47.913599 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.913608 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:47.913615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:47.913674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:47.946699 1039759 cri.go:89] found id: ""
	I0729 14:39:47.946725 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.946734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:47.946740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:47.946792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:47.986492 1039759 cri.go:89] found id: ""
	I0729 14:39:47.986533 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.986546 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:47.986554 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:47.986635 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:48.027232 1039759 cri.go:89] found id: ""
	I0729 14:39:48.027260 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.027268 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:48.027274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:48.027327 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:48.065119 1039759 cri.go:89] found id: ""
	I0729 14:39:48.065145 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.065153 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:48.065159 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:48.065217 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:48.105087 1039759 cri.go:89] found id: ""
	I0729 14:39:48.105119 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.105128 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:48.105134 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:48.105199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:48.144684 1039759 cri.go:89] found id: ""
	I0729 14:39:48.144718 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.144730 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:48.144737 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:48.144816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:48.180350 1039759 cri.go:89] found id: ""
	I0729 14:39:48.180380 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.180389 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:48.180401 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:48.180436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:48.259859 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:48.259905 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:48.301132 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:48.301163 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:48.352753 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:48.352795 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:48.365936 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:48.365969 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:48.434634 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:46.293123 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.293674 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.294113 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:47.674411 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.173727 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.707887 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:51.207444 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
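The interleaved pod_ready lines come from the other clusters in this run polling the Ready condition of their metrics-server pods. A minimal Go sketch of an equivalent one-shot check via kubectl JSONPath, with the pod name taken from the log purely as an example (this is not the test harness's actual code):

// Sketch only: report whether a pod's Ready condition is "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", pod,
		"-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Pod name copied from the log lines above; substitute your own.
	ready, err := podReady("kube-system", "metrics-server-569cc877fc-5msnp")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("Ready:", ready)
}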
	I0729 14:39:50.934903 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:50.948702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:50.948787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:50.982889 1039759 cri.go:89] found id: ""
	I0729 14:39:50.982917 1039759 logs.go:276] 0 containers: []
	W0729 14:39:50.982927 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:50.982933 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:50.983010 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:51.020679 1039759 cri.go:89] found id: ""
	I0729 14:39:51.020713 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.020726 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:51.020734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:51.020818 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:51.055114 1039759 cri.go:89] found id: ""
	I0729 14:39:51.055147 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.055158 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:51.055166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:51.055237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:51.089053 1039759 cri.go:89] found id: ""
	I0729 14:39:51.089087 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.089099 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:51.089108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:51.089184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:51.125823 1039759 cri.go:89] found id: ""
	I0729 14:39:51.125851 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.125861 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:51.125868 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:51.125938 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:51.162645 1039759 cri.go:89] found id: ""
	I0729 14:39:51.162683 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.162694 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:51.162702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:51.162767 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:51.196820 1039759 cri.go:89] found id: ""
	I0729 14:39:51.196849 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.196857 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:51.196864 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:51.196937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:51.236442 1039759 cri.go:89] found id: ""
	I0729 14:39:51.236469 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.236479 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:51.236491 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:51.236506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:51.317077 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:51.317101 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:51.317119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:51.398118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:51.398172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:51.437096 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:51.437128 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:51.488949 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:51.488992 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:52.795544 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.294184 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:52.174241 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.672702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:53.207592 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.706971 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.004536 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:54.019400 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:54.019480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:54.054592 1039759 cri.go:89] found id: ""
	I0729 14:39:54.054626 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.054639 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:54.054647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:54.054712 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:54.090184 1039759 cri.go:89] found id: ""
	I0729 14:39:54.090217 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.090227 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:54.090234 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:54.090304 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:54.129977 1039759 cri.go:89] found id: ""
	I0729 14:39:54.130007 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.130016 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:54.130022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:54.130081 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:54.170940 1039759 cri.go:89] found id: ""
	I0729 14:39:54.170970 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.170980 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:54.170988 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:54.171053 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:54.206197 1039759 cri.go:89] found id: ""
	I0729 14:39:54.206224 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.206244 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:54.206251 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:54.206340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:54.246929 1039759 cri.go:89] found id: ""
	I0729 14:39:54.246963 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.246973 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:54.246980 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:54.247049 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:54.286202 1039759 cri.go:89] found id: ""
	I0729 14:39:54.286231 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.286240 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:54.286245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:54.286315 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:54.321784 1039759 cri.go:89] found id: ""
	I0729 14:39:54.321815 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.321824 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:54.321837 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:54.321860 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:54.363159 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:54.363187 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:54.416151 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:54.416194 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:54.429824 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:54.429852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:54.506348 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:54.506373 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:54.506390 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.094810 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:57.108163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:57.108238 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:57.143556 1039759 cri.go:89] found id: ""
	I0729 14:39:57.143588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.143601 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:57.143608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:57.143678 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:57.177664 1039759 cri.go:89] found id: ""
	I0729 14:39:57.177695 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.177706 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:57.177714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:57.177801 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:57.212046 1039759 cri.go:89] found id: ""
	I0729 14:39:57.212106 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.212231 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:57.212249 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:57.212323 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:57.252518 1039759 cri.go:89] found id: ""
	I0729 14:39:57.252549 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.252558 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:57.252564 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:57.252677 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:57.287045 1039759 cri.go:89] found id: ""
	I0729 14:39:57.287069 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.287077 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:57.287084 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:57.287141 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:57.324553 1039759 cri.go:89] found id: ""
	I0729 14:39:57.324588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.324599 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:57.324607 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:57.324684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:57.358761 1039759 cri.go:89] found id: ""
	I0729 14:39:57.358801 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.358811 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:57.358819 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:57.358898 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:57.402023 1039759 cri.go:89] found id: ""
	I0729 14:39:57.402051 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.402062 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:57.402074 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:57.402094 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:57.445600 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:57.445632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:57.501876 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:57.501911 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:57.518264 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:57.518299 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:57.593247 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
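The describe-nodes step keeps failing with "connection refused" on localhost:8443, which is consistent with the empty kube-apiserver container listings above: nothing is serving the API port on this node. A minimal sketch, assuming the same 127.0.0.1:8443 endpoint from the log, of a quick reachability probe one could run while debugging (illustrative only, not part of the test suite):

// Sketch only: probe the API server port referenced in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("connection refused or timed out:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}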
	I0729 14:39:57.593274 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:57.593292 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.793782 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.794287 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:56.673243 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.174416 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:57.707618 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.208574 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.181109 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:00.194553 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:00.194641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:00.237761 1039759 cri.go:89] found id: ""
	I0729 14:40:00.237801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.237814 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:00.237829 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:00.237901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:00.273113 1039759 cri.go:89] found id: ""
	I0729 14:40:00.273145 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.273157 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:00.273166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:00.273232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:00.312136 1039759 cri.go:89] found id: ""
	I0729 14:40:00.312169 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.312176 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:00.312182 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:00.312249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:00.349610 1039759 cri.go:89] found id: ""
	I0729 14:40:00.349642 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.349654 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:00.349662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:00.349792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:00.384121 1039759 cri.go:89] found id: ""
	I0729 14:40:00.384148 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.384157 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:00.384163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:00.384233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:00.419684 1039759 cri.go:89] found id: ""
	I0729 14:40:00.419720 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.419731 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:00.419739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:00.419809 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:00.453905 1039759 cri.go:89] found id: ""
	I0729 14:40:00.453937 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.453945 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:00.453951 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:00.454023 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:00.490124 1039759 cri.go:89] found id: ""
	I0729 14:40:00.490149 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.490158 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:00.490168 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:00.490185 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:00.562684 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:00.562713 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:00.562735 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:00.643860 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:00.643899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:00.683247 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:00.683276 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:00.734131 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:00.734166 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.249468 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:03.262712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:03.262788 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:03.300774 1039759 cri.go:89] found id: ""
	I0729 14:40:03.300801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.300816 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:03.300823 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:03.300891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:03.335367 1039759 cri.go:89] found id: ""
	I0729 14:40:03.335398 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.335409 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:03.335419 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:03.335488 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:03.375683 1039759 cri.go:89] found id: ""
	I0729 14:40:03.375717 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.375728 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:03.375734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:03.375814 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:03.409593 1039759 cri.go:89] found id: ""
	I0729 14:40:03.409623 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.409631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:03.409637 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:03.409711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:03.444531 1039759 cri.go:89] found id: ""
	I0729 14:40:03.444566 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.444578 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:03.444585 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:03.444655 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:03.479446 1039759 cri.go:89] found id: ""
	I0729 14:40:03.479476 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.479487 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:03.479495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:03.479563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:03.517277 1039759 cri.go:89] found id: ""
	I0729 14:40:03.517311 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.517321 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:03.517329 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:03.517396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:03.556343 1039759 cri.go:89] found id: ""
	I0729 14:40:03.556373 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.556381 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:03.556391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:03.556422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:03.610156 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:03.610196 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.624776 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:03.624812 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:03.696584 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:03.696609 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:03.696625 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:03.775066 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:03.775109 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:01.794683 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:03.795112 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:01.673980 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.173900 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:02.706731 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.707655 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:07.207027 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.319720 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:06.332865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:06.332937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:06.366576 1039759 cri.go:89] found id: ""
	I0729 14:40:06.366608 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.366631 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:06.366639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:06.366730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:06.402710 1039759 cri.go:89] found id: ""
	I0729 14:40:06.402735 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.402743 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:06.402748 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:06.402804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:06.439048 1039759 cri.go:89] found id: ""
	I0729 14:40:06.439095 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.439116 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:06.439125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:06.439196 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:06.473407 1039759 cri.go:89] found id: ""
	I0729 14:40:06.473443 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.473456 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:06.473464 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:06.473544 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:06.507278 1039759 cri.go:89] found id: ""
	I0729 14:40:06.507309 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.507319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:06.507327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:06.507396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:06.541573 1039759 cri.go:89] found id: ""
	I0729 14:40:06.541600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.541608 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:06.541617 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:06.541679 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:06.587666 1039759 cri.go:89] found id: ""
	I0729 14:40:06.587697 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.587707 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:06.587714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:06.587773 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:06.622415 1039759 cri.go:89] found id: ""
	I0729 14:40:06.622448 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.622459 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:06.622478 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:06.622497 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:06.659987 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:06.660019 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:06.716303 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:06.716338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:06.731051 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:06.731076 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:06.809014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:06.809045 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:06.809064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:06.293552 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:08.294453 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:10.295216 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.674445 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.174349 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.207784 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.208318 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.387843 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:09.401894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:09.401984 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:09.439385 1039759 cri.go:89] found id: ""
	I0729 14:40:09.439425 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.439438 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:09.439446 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:09.439506 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:09.474307 1039759 cri.go:89] found id: ""
	I0729 14:40:09.474340 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.474352 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:09.474361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:09.474434 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:09.508198 1039759 cri.go:89] found id: ""
	I0729 14:40:09.508233 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.508245 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:09.508253 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:09.508325 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:09.543729 1039759 cri.go:89] found id: ""
	I0729 14:40:09.543762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.543772 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:09.543779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:09.543847 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:09.598723 1039759 cri.go:89] found id: ""
	I0729 14:40:09.598760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.598769 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:09.598775 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:09.598841 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:09.636009 1039759 cri.go:89] found id: ""
	I0729 14:40:09.636038 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.636050 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:09.636058 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:09.636126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:09.675590 1039759 cri.go:89] found id: ""
	I0729 14:40:09.675618 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.675628 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:09.675636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:09.675698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:09.710331 1039759 cri.go:89] found id: ""
	I0729 14:40:09.710374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.710385 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:09.710397 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:09.710416 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:09.790014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:09.790046 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:09.790064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:09.870233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:09.870278 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:09.910421 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:09.910454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:09.962429 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:09.962474 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.476775 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:12.490804 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:12.490875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:12.529435 1039759 cri.go:89] found id: ""
	I0729 14:40:12.529466 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.529478 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:12.529485 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:12.529551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:12.564769 1039759 cri.go:89] found id: ""
	I0729 14:40:12.564806 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.564818 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:12.564826 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:12.564912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:12.600253 1039759 cri.go:89] found id: ""
	I0729 14:40:12.600285 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.600296 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:12.600304 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:12.600367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:12.636112 1039759 cri.go:89] found id: ""
	I0729 14:40:12.636146 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.636155 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:12.636161 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:12.636216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:12.675592 1039759 cri.go:89] found id: ""
	I0729 14:40:12.675621 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.675631 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:12.675639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:12.675711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:12.711438 1039759 cri.go:89] found id: ""
	I0729 14:40:12.711469 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.711480 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:12.711488 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:12.711554 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:12.745497 1039759 cri.go:89] found id: ""
	I0729 14:40:12.745524 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.745533 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:12.745539 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:12.745598 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:12.778934 1039759 cri.go:89] found id: ""
	I0729 14:40:12.778966 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.778977 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:12.778991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:12.779010 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:12.854721 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:12.854759 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:12.854780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:12.932118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:12.932158 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:12.974429 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:12.974461 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:13.030073 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:13.030108 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.795056 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.295125 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.674169 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:14.173503 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:16.175691 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:13.707268 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.708540 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.544245 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:15.559013 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:15.559090 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:15.594018 1039759 cri.go:89] found id: ""
	I0729 14:40:15.594051 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.594064 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:15.594076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:15.594147 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:15.630734 1039759 cri.go:89] found id: ""
	I0729 14:40:15.630762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.630771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:15.630777 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:15.630832 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:15.666159 1039759 cri.go:89] found id: ""
	I0729 14:40:15.666191 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.666202 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:15.666210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:15.666275 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:15.701058 1039759 cri.go:89] found id: ""
	I0729 14:40:15.701088 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.701098 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:15.701115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:15.701170 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:15.737006 1039759 cri.go:89] found id: ""
	I0729 14:40:15.737040 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.737052 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:15.737066 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:15.737139 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:15.775678 1039759 cri.go:89] found id: ""
	I0729 14:40:15.775706 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.775718 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:15.775728 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:15.775795 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:15.812239 1039759 cri.go:89] found id: ""
	I0729 14:40:15.812268 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.812276 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:15.812283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:15.812348 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:15.847653 1039759 cri.go:89] found id: ""
	I0729 14:40:15.847682 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.847693 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:15.847707 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:15.847725 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:15.903094 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:15.903137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:15.917060 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:15.917093 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:15.993458 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:15.993481 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:15.993499 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:16.073369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:16.073409 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:18.616107 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:18.630263 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:18.630340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:18.668228 1039759 cri.go:89] found id: ""
	I0729 14:40:18.668261 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.668271 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:18.668279 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:18.668342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:18.706863 1039759 cri.go:89] found id: ""
	I0729 14:40:18.706891 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.706902 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:18.706909 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:18.706978 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:18.739703 1039759 cri.go:89] found id: ""
	I0729 14:40:18.739728 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.739736 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:18.739742 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:18.739796 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:18.777025 1039759 cri.go:89] found id: ""
	I0729 14:40:18.777066 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.777077 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:18.777085 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:18.777158 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:18.814000 1039759 cri.go:89] found id: ""
	I0729 14:40:18.814026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.814039 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:18.814051 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:18.814116 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:18.851027 1039759 cri.go:89] found id: ""
	I0729 14:40:18.851058 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.851069 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:18.851076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:18.851151 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:17.796245 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.293964 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.673560 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:21.173099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.207376 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.707629 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.903888 1039759 cri.go:89] found id: ""
	I0729 14:40:18.903920 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.903932 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:18.903941 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:18.904002 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:18.938756 1039759 cri.go:89] found id: ""
	I0729 14:40:18.938784 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.938791 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:18.938801 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:18.938814 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:18.988482 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:18.988520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:19.002145 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:19.002177 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:19.085372 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:19.085397 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:19.085424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:19.171294 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:19.171387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:21.709578 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:21.722874 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:21.722941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:21.768110 1039759 cri.go:89] found id: ""
	I0729 14:40:21.768139 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.768150 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:21.768156 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:21.768210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:21.808853 1039759 cri.go:89] found id: ""
	I0729 14:40:21.808886 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.808897 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:21.808905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:21.808974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:21.843432 1039759 cri.go:89] found id: ""
	I0729 14:40:21.843472 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.843484 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:21.843493 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:21.843576 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:21.876497 1039759 cri.go:89] found id: ""
	I0729 14:40:21.876535 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.876547 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:21.876555 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:21.876633 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:21.911528 1039759 cri.go:89] found id: ""
	I0729 14:40:21.911556 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.911565 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:21.911571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:21.911626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:21.944514 1039759 cri.go:89] found id: ""
	I0729 14:40:21.944548 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.944560 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:21.944569 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:21.944641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:21.978113 1039759 cri.go:89] found id: ""
	I0729 14:40:21.978151 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.978162 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:21.978170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:21.978233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:22.012390 1039759 cri.go:89] found id: ""
	I0729 14:40:22.012438 1039759 logs.go:276] 0 containers: []
	W0729 14:40:22.012449 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:22.012461 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:22.012484 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:22.027921 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:22.027952 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:22.095087 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:22.095115 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:22.095132 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:22.178462 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:22.178495 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:22.220155 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:22.220188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:22.794431 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.295391 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:23.174050 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.673437 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:22.708012 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.207491 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:24.771932 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:24.784764 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:24.784851 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:24.820445 1039759 cri.go:89] found id: ""
	I0729 14:40:24.820473 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.820485 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:24.820501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:24.820569 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:24.854753 1039759 cri.go:89] found id: ""
	I0729 14:40:24.854786 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.854796 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:24.854802 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:24.854856 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:24.889200 1039759 cri.go:89] found id: ""
	I0729 14:40:24.889230 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.889242 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:24.889250 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:24.889312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:24.932383 1039759 cri.go:89] found id: ""
	I0729 14:40:24.932435 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.932447 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:24.932454 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:24.932515 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:24.971830 1039759 cri.go:89] found id: ""
	I0729 14:40:24.971859 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.971871 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:24.971879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:24.971936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:25.014336 1039759 cri.go:89] found id: ""
	I0729 14:40:25.014374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.014386 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:25.014397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:25.014464 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:25.048131 1039759 cri.go:89] found id: ""
	I0729 14:40:25.048161 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.048171 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:25.048177 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:25.048232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:25.089830 1039759 cri.go:89] found id: ""
	I0729 14:40:25.089866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.089878 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:25.089893 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:25.089907 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:25.172078 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:25.172113 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:25.221629 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:25.221661 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:25.291761 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:25.291806 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:25.314521 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:25.314559 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:25.402738 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
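	Each polling attempt above runs the same diagnostic sequence: minikube shells out over SSH, probes for the control-plane containers, and then collects kubelet, dmesg, "describe nodes", CRI-O, and container-status output. A minimal sketch of that sequence, reconstructed only from the Run: lines in this log (paths and flags exactly as logged, not independently verified):
	
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"     # every probe in this run returned no containers
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # fails: localhost:8443 connection refused
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	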
	I0729 14:40:27.903335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:27.918335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:27.918413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:27.951929 1039759 cri.go:89] found id: ""
	I0729 14:40:27.951955 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.951966 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:27.951972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:27.952029 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:27.986229 1039759 cri.go:89] found id: ""
	I0729 14:40:27.986266 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.986279 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:27.986287 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:27.986352 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:28.019467 1039759 cri.go:89] found id: ""
	I0729 14:40:28.019504 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.019517 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:28.019524 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:28.019590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:28.053762 1039759 cri.go:89] found id: ""
	I0729 14:40:28.053790 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.053799 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:28.053806 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:28.053858 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:28.088947 1039759 cri.go:89] found id: ""
	I0729 14:40:28.088975 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.088989 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:28.088996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:28.089070 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:28.130018 1039759 cri.go:89] found id: ""
	I0729 14:40:28.130052 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.130064 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:28.130072 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:28.130143 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:28.163682 1039759 cri.go:89] found id: ""
	I0729 14:40:28.163715 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.163725 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:28.163734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:28.163802 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:28.199698 1039759 cri.go:89] found id: ""
	I0729 14:40:28.199732 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.199744 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:28.199757 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:28.199774 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:28.253735 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:28.253776 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:28.267786 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:28.267825 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:28.337218 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:28.337250 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:28.337265 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:28.419644 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:28.419688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:27.793963 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.293775 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:28.172846 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.173544 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:27.707884 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:29.708174 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.958707 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:30.972073 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:30.972146 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:31.016629 1039759 cri.go:89] found id: ""
	I0729 14:40:31.016662 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.016673 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:31.016681 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:31.016747 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:31.058891 1039759 cri.go:89] found id: ""
	I0729 14:40:31.058921 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.058930 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:31.058936 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:31.059004 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:31.096599 1039759 cri.go:89] found id: ""
	I0729 14:40:31.096633 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.096645 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:31.096654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:31.096741 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:31.143525 1039759 cri.go:89] found id: ""
	I0729 14:40:31.143554 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.143562 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:31.143568 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:31.143628 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:31.180188 1039759 cri.go:89] found id: ""
	I0729 14:40:31.180220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.180230 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:31.180239 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:31.180310 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:31.219995 1039759 cri.go:89] found id: ""
	I0729 14:40:31.220026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.220037 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:31.220045 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:31.220108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:31.254137 1039759 cri.go:89] found id: ""
	I0729 14:40:31.254182 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.254192 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:31.254200 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:31.254272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:31.288065 1039759 cri.go:89] found id: ""
	I0729 14:40:31.288098 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.288109 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:31.288122 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:31.288137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:31.341299 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:31.341338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:31.355357 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:31.355387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:31.427414 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:31.427439 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:31.427453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:31.508372 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:31.508439 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:32.294256 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.295131 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.174315 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.674462 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.208183 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:36.707763 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.052770 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:34.066300 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:34.066366 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:34.104242 1039759 cri.go:89] found id: ""
	I0729 14:40:34.104278 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.104290 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:34.104299 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:34.104367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:34.143092 1039759 cri.go:89] found id: ""
	I0729 14:40:34.143125 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.143137 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:34.143145 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:34.143216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:34.177966 1039759 cri.go:89] found id: ""
	I0729 14:40:34.177993 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.178002 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:34.178011 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:34.178098 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:34.218325 1039759 cri.go:89] found id: ""
	I0729 14:40:34.218351 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.218361 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:34.218369 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:34.218437 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:34.256632 1039759 cri.go:89] found id: ""
	I0729 14:40:34.256665 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.256675 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:34.256683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:34.256753 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:34.290713 1039759 cri.go:89] found id: ""
	I0729 14:40:34.290739 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.290747 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:34.290753 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:34.290816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:34.331345 1039759 cri.go:89] found id: ""
	I0729 14:40:34.331378 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.331389 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:34.331397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:34.331468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:34.370184 1039759 cri.go:89] found id: ""
	I0729 14:40:34.370214 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.370226 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:34.370239 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:34.370256 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:34.448667 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:34.448709 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:34.492943 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:34.492974 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:34.548784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:34.548827 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:34.565353 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:34.565389 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:34.639411 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.140039 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:37.153732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:37.153806 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:37.189360 1039759 cri.go:89] found id: ""
	I0729 14:40:37.189389 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.189398 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:37.189404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:37.189474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:37.225790 1039759 cri.go:89] found id: ""
	I0729 14:40:37.225820 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.225831 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:37.225839 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:37.225914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:37.261742 1039759 cri.go:89] found id: ""
	I0729 14:40:37.261772 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.261782 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:37.261791 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:37.261862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:37.295791 1039759 cri.go:89] found id: ""
	I0729 14:40:37.295826 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.295835 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:37.295843 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:37.295908 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:37.331290 1039759 cri.go:89] found id: ""
	I0729 14:40:37.331324 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.331334 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:37.331343 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:37.331413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:37.366150 1039759 cri.go:89] found id: ""
	I0729 14:40:37.366183 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.366195 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:37.366203 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:37.366273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:37.400983 1039759 cri.go:89] found id: ""
	I0729 14:40:37.401019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.401030 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:37.401038 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:37.401110 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:37.435333 1039759 cri.go:89] found id: ""
	I0729 14:40:37.435368 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.435379 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:37.435391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:37.435407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:37.488020 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:37.488057 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:37.501543 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:37.501573 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:37.576006 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.576033 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:37.576050 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:37.658600 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:37.658641 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:36.794615 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:38.795414 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:37.175174 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.674361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.207946 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:41.707724 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:40.200763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:40.216048 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:40.216121 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:40.253969 1039759 cri.go:89] found id: ""
	I0729 14:40:40.253996 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.254005 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:40.254012 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:40.254078 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:40.289557 1039759 cri.go:89] found id: ""
	I0729 14:40:40.289595 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.289608 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:40.289616 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:40.289698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:40.329756 1039759 cri.go:89] found id: ""
	I0729 14:40:40.329799 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.329823 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:40.329833 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:40.329906 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:40.365281 1039759 cri.go:89] found id: ""
	I0729 14:40:40.365315 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.365327 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:40.365335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:40.365403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:40.401300 1039759 cri.go:89] found id: ""
	I0729 14:40:40.401327 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.401336 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:40.401342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:40.401398 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:40.435679 1039759 cri.go:89] found id: ""
	I0729 14:40:40.435710 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.435719 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:40.435726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:40.435781 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:40.475825 1039759 cri.go:89] found id: ""
	I0729 14:40:40.475851 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.475859 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:40.475866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:40.475926 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:40.512153 1039759 cri.go:89] found id: ""
	I0729 14:40:40.512184 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.512191 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:40.512202 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:40.512215 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:40.563983 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:40.564022 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:40.578823 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:40.578853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:40.650282 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:40.650311 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:40.650328 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:40.734933 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:40.734980 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.280095 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:43.294284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:43.294361 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:43.328862 1039759 cri.go:89] found id: ""
	I0729 14:40:43.328890 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.328899 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:43.328905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:43.328971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:43.366321 1039759 cri.go:89] found id: ""
	I0729 14:40:43.366364 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.366376 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:43.366384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:43.366459 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:43.400189 1039759 cri.go:89] found id: ""
	I0729 14:40:43.400220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.400229 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:43.400235 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:43.400299 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:43.438521 1039759 cri.go:89] found id: ""
	I0729 14:40:43.438562 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.438582 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:43.438594 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:43.438665 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:43.473931 1039759 cri.go:89] found id: ""
	I0729 14:40:43.473958 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.473966 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:43.473972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:43.474035 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:43.511460 1039759 cri.go:89] found id: ""
	I0729 14:40:43.511490 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.511497 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:43.511506 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:43.511563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:43.547255 1039759 cri.go:89] found id: ""
	I0729 14:40:43.547290 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.547301 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:43.547309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:43.547375 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:43.582384 1039759 cri.go:89] found id: ""
	I0729 14:40:43.582418 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.582428 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:43.582441 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:43.582459 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:43.595747 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:43.595780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:43.665389 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:43.665413 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:43.665427 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:43.752669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:43.752712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.797239 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:43.797272 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:41.294242 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:43.294985 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:45.794449 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:42.173495 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.173830 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.207593 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.706855 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.352841 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:46.368204 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:46.368278 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:46.406661 1039759 cri.go:89] found id: ""
	I0729 14:40:46.406687 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.406695 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:46.406701 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:46.406761 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:46.443728 1039759 cri.go:89] found id: ""
	I0729 14:40:46.443760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.443771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:46.443778 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:46.443845 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:46.477632 1039759 cri.go:89] found id: ""
	I0729 14:40:46.477666 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.477677 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:46.477686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:46.477754 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:46.512510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.512538 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.512549 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:46.512557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:46.512629 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:46.550803 1039759 cri.go:89] found id: ""
	I0729 14:40:46.550834 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.550843 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:46.550848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:46.550914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:46.591610 1039759 cri.go:89] found id: ""
	I0729 14:40:46.591640 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.591652 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:46.591661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:46.591723 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:46.631090 1039759 cri.go:89] found id: ""
	I0729 14:40:46.631122 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.631132 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:46.631139 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:46.631199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:46.670510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.670542 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.670554 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:46.670573 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:46.670590 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:46.725560 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:46.725594 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:46.739348 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:46.739372 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:46.812850 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:46.812874 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:46.812892 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:46.892922 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:46.892964 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:47.795538 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:50.293685 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.674514 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.174577 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:48.708243 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.207168 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.438741 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:49.452505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:49.452588 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:49.487294 1039759 cri.go:89] found id: ""
	I0729 14:40:49.487323 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.487331 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:49.487340 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:49.487407 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:49.521783 1039759 cri.go:89] found id: ""
	I0729 14:40:49.521816 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.521828 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:49.521836 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:49.521901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:49.557039 1039759 cri.go:89] found id: ""
	I0729 14:40:49.557075 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.557086 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:49.557094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:49.557162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:49.590431 1039759 cri.go:89] found id: ""
	I0729 14:40:49.590462 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.590474 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:49.590494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:49.590574 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:49.626230 1039759 cri.go:89] found id: ""
	I0729 14:40:49.626260 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.626268 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:49.626274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:49.626339 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:49.662030 1039759 cri.go:89] found id: ""
	I0729 14:40:49.662060 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.662068 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:49.662075 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:49.662130 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:49.699988 1039759 cri.go:89] found id: ""
	I0729 14:40:49.700019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.700035 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:49.700076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:49.700144 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:49.736830 1039759 cri.go:89] found id: ""
	I0729 14:40:49.736864 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.736873 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:49.736882 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:49.736895 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:49.775670 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:49.775703 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:49.830820 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:49.830853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:49.846374 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:49.846407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:49.917475 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:49.917502 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:49.917520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.499291 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:52.513571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:52.513641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:52.547437 1039759 cri.go:89] found id: ""
	I0729 14:40:52.547474 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.547487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:52.547495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:52.547559 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:52.587664 1039759 cri.go:89] found id: ""
	I0729 14:40:52.587705 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.587718 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:52.587726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:52.587799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:52.630642 1039759 cri.go:89] found id: ""
	I0729 14:40:52.630670 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.630678 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:52.630684 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:52.630740 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:52.665978 1039759 cri.go:89] found id: ""
	I0729 14:40:52.666010 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.666022 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:52.666030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:52.666103 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:52.701111 1039759 cri.go:89] found id: ""
	I0729 14:40:52.701140 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.701148 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:52.701155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:52.701211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:52.744219 1039759 cri.go:89] found id: ""
	I0729 14:40:52.744247 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.744257 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:52.744265 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:52.744329 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:52.781081 1039759 cri.go:89] found id: ""
	I0729 14:40:52.781113 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.781122 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:52.781128 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:52.781198 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:52.817938 1039759 cri.go:89] found id: ""
	I0729 14:40:52.817974 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.817985 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:52.817999 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:52.818016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:52.895387 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:52.895416 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:52.895433 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.976313 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:52.976356 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:53.013814 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:53.013852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:53.065901 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:53.065937 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:52.798083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.293459 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.674103 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:54.174456 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:53.208082 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.707719 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.580590 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:55.595023 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:55.595108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:55.631449 1039759 cri.go:89] found id: ""
	I0729 14:40:55.631479 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.631487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:55.631494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:55.631551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:55.666245 1039759 cri.go:89] found id: ""
	I0729 14:40:55.666274 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.666283 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:55.666296 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:55.666364 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:55.706582 1039759 cri.go:89] found id: ""
	I0729 14:40:55.706611 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.706621 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:55.706629 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:55.706696 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:55.741930 1039759 cri.go:89] found id: ""
	I0729 14:40:55.741962 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.741973 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:55.741990 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:55.742058 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:55.781440 1039759 cri.go:89] found id: ""
	I0729 14:40:55.781475 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.781486 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:55.781494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:55.781599 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:55.825329 1039759 cri.go:89] found id: ""
	I0729 14:40:55.825366 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.825377 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:55.825387 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:55.825466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:55.860834 1039759 cri.go:89] found id: ""
	I0729 14:40:55.860866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.860878 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:55.860886 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:55.860950 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:55.895460 1039759 cri.go:89] found id: ""
	I0729 14:40:55.895492 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.895502 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:55.895514 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:55.895531 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:55.951739 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:55.951781 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:55.965760 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:55.965792 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:56.044422 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:56.044458 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:56.044477 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:56.123669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:56.123714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:58.668279 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:58.682912 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:58.682974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:58.718757 1039759 cri.go:89] found id: ""
	I0729 14:40:58.718787 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.718798 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:58.718807 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:58.718861 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:58.756986 1039759 cri.go:89] found id: ""
	I0729 14:40:58.757015 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.757025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:58.757031 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:58.757092 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:58.797572 1039759 cri.go:89] found id: ""
	I0729 14:40:58.797600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.797611 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:58.797620 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:58.797689 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:58.839410 1039759 cri.go:89] found id: ""
	I0729 14:40:58.839442 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.839453 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:58.839461 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:58.839523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:57.293935 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:59.294805 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:56.673078 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.674177 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:01.173709 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:57.708051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:00.207822 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:02.208128 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.874477 1039759 cri.go:89] found id: ""
	I0729 14:40:58.874508 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.874519 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:58.874528 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:58.874602 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:58.910248 1039759 cri.go:89] found id: ""
	I0729 14:40:58.910281 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.910296 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:58.910307 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:58.910368 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:58.944845 1039759 cri.go:89] found id: ""
	I0729 14:40:58.944879 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.944890 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:58.944896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:58.944955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:58.978818 1039759 cri.go:89] found id: ""
	I0729 14:40:58.978854 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.978867 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:58.978879 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:58.978898 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:59.018961 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:59.018993 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:59.069883 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:59.069920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:59.083277 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:59.083304 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:59.159470 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:59.159494 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:59.159511 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:01.746915 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:01.759883 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:01.759949 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:01.796563 1039759 cri.go:89] found id: ""
	I0729 14:41:01.796589 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.796602 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:01.796608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:01.796691 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:01.831464 1039759 cri.go:89] found id: ""
	I0729 14:41:01.831499 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.831511 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:01.831520 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:01.831586 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:01.868633 1039759 cri.go:89] found id: ""
	I0729 14:41:01.868660 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.868668 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:01.868674 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:01.868732 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:01.903154 1039759 cri.go:89] found id: ""
	I0729 14:41:01.903183 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.903194 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:01.903202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:01.903272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:01.938256 1039759 cri.go:89] found id: ""
	I0729 14:41:01.938292 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.938304 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:01.938312 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:01.938384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:01.978117 1039759 cri.go:89] found id: ""
	I0729 14:41:01.978147 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.978159 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:01.978168 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:01.978242 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:02.014061 1039759 cri.go:89] found id: ""
	I0729 14:41:02.014089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.014100 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:02.014108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:02.014176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:02.050133 1039759 cri.go:89] found id: ""
	I0729 14:41:02.050165 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.050177 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:02.050189 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:02.050206 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:02.101188 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:02.101253 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:02.114343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:02.114369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:02.190309 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:02.190338 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:02.190354 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:02.266895 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:02.266939 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:01.794976 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.295199 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:03.176713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:05.673543 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.708032 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:07.207702 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.809474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:04.824652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:04.824725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:04.858442 1039759 cri.go:89] found id: ""
	I0729 14:41:04.858474 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.858483 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:04.858490 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:04.858542 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:04.893199 1039759 cri.go:89] found id: ""
	I0729 14:41:04.893229 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.893237 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:04.893243 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:04.893297 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:04.929480 1039759 cri.go:89] found id: ""
	I0729 14:41:04.929512 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.929524 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:04.929532 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:04.929601 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:04.965097 1039759 cri.go:89] found id: ""
	I0729 14:41:04.965127 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.965139 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:04.965147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:04.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:05.003419 1039759 cri.go:89] found id: ""
	I0729 14:41:05.003449 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.003460 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:05.003467 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:05.003557 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:05.037408 1039759 cri.go:89] found id: ""
	I0729 14:41:05.037439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.037451 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:05.037458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:05.037527 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:05.072909 1039759 cri.go:89] found id: ""
	I0729 14:41:05.072942 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.072953 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:05.072961 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:05.073034 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:05.123731 1039759 cri.go:89] found id: ""
	I0729 14:41:05.123764 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.123776 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:05.123787 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:05.123802 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:05.188687 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:05.188732 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:05.204119 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:05.204160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:05.294702 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:05.294732 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:05.294750 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:05.377412 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:05.377456 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:07.923437 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:07.937633 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:07.937711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:07.976813 1039759 cri.go:89] found id: ""
	I0729 14:41:07.976850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:07.976861 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:07.976872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:07.976946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:08.013051 1039759 cri.go:89] found id: ""
	I0729 14:41:08.013089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.013100 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:08.013109 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:08.013177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:08.047372 1039759 cri.go:89] found id: ""
	I0729 14:41:08.047404 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.047413 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:08.047420 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:08.047477 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:08.080555 1039759 cri.go:89] found id: ""
	I0729 14:41:08.080594 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.080607 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:08.080615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:08.080684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:08.117054 1039759 cri.go:89] found id: ""
	I0729 14:41:08.117087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.117098 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:08.117106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:08.117175 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:08.152270 1039759 cri.go:89] found id: ""
	I0729 14:41:08.152295 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.152303 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:08.152309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:08.152373 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:08.188804 1039759 cri.go:89] found id: ""
	I0729 14:41:08.188830 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.188842 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:08.188848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:08.188903 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:08.225101 1039759 cri.go:89] found id: ""
	I0729 14:41:08.225139 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.225151 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:08.225164 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:08.225182 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:08.278721 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:08.278759 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:08.293417 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:08.293453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:08.371802 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:08.371825 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:08.371843 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:08.452233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:08.452274 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:06.795598 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.294006 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:08.175147 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.673937 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.707777 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:12.208180 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.993379 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:11.007599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:11.007668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:11.045603 1039759 cri.go:89] found id: ""
	I0729 14:41:11.045652 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.045675 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:11.045683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:11.045746 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:11.079682 1039759 cri.go:89] found id: ""
	I0729 14:41:11.079711 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.079722 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:11.079730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:11.079797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:11.122138 1039759 cri.go:89] found id: ""
	I0729 14:41:11.122167 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.122180 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:11.122185 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:11.122249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:11.157416 1039759 cri.go:89] found id: ""
	I0729 14:41:11.157444 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.157452 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:11.157458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:11.157514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:11.198589 1039759 cri.go:89] found id: ""
	I0729 14:41:11.198631 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.198643 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:11.198652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:11.198725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:11.238329 1039759 cri.go:89] found id: ""
	I0729 14:41:11.238360 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.238369 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:11.238376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:11.238442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:11.273283 1039759 cri.go:89] found id: ""
	I0729 14:41:11.273313 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.273322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:11.273328 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:11.273382 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:11.313927 1039759 cri.go:89] found id: ""
	I0729 14:41:11.313972 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.313984 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:11.313997 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:11.314014 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:11.366507 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:11.366546 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:11.380529 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:11.380566 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:11.451839 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:11.451862 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:11.451882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:11.537109 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:11.537150 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:11.294967 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.793738 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.173482 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:15.673025 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.706708 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:16.707135 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.104794 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:14.117474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:14.117541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:14.154117 1039759 cri.go:89] found id: ""
	I0729 14:41:14.154151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.154163 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:14.154171 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:14.154236 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:14.195762 1039759 cri.go:89] found id: ""
	I0729 14:41:14.195793 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.195804 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:14.195812 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:14.195875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:14.231434 1039759 cri.go:89] found id: ""
	I0729 14:41:14.231460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.231467 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:14.231474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:14.231523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:14.264802 1039759 cri.go:89] found id: ""
	I0729 14:41:14.264839 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.264851 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:14.264859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:14.264932 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:14.300162 1039759 cri.go:89] found id: ""
	I0729 14:41:14.300184 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.300194 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:14.300202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:14.300262 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:14.335351 1039759 cri.go:89] found id: ""
	I0729 14:41:14.335385 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.335396 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:14.335404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:14.335468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:14.370064 1039759 cri.go:89] found id: ""
	I0729 14:41:14.370096 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.370107 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:14.370115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:14.370184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:14.406506 1039759 cri.go:89] found id: ""
	I0729 14:41:14.406538 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.406549 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:14.406562 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:14.406579 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:14.445641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:14.445681 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:14.496132 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:14.496165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:14.509732 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:14.509767 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:14.581519 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:14.581541 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:14.581558 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.164487 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:17.178359 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:17.178447 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:17.213780 1039759 cri.go:89] found id: ""
	I0729 14:41:17.213869 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.213887 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:17.213896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:17.213966 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:17.251006 1039759 cri.go:89] found id: ""
	I0729 14:41:17.251045 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.251056 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:17.251063 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:17.251135 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:17.306624 1039759 cri.go:89] found id: ""
	I0729 14:41:17.306654 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.306683 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:17.306691 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:17.306775 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:17.358882 1039759 cri.go:89] found id: ""
	I0729 14:41:17.358915 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.358927 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:17.358935 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:17.359008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:17.408592 1039759 cri.go:89] found id: ""
	I0729 14:41:17.408620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.408636 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:17.408642 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:17.408705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:17.445201 1039759 cri.go:89] found id: ""
	I0729 14:41:17.445228 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.445236 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:17.445242 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:17.445305 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:17.477441 1039759 cri.go:89] found id: ""
	I0729 14:41:17.477483 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.477511 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:17.477518 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:17.477591 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:17.509148 1039759 cri.go:89] found id: ""
	I0729 14:41:17.509179 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.509190 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:17.509203 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:17.509220 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:17.559784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:17.559823 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:17.574163 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:17.574199 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:17.644249 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:17.644277 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:17.644294 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.720652 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:17.720688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:16.293977 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.793489 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.793760 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:17.674099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.173742 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.707238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:21.209948 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.261591 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:20.274649 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:20.274731 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:20.311561 1039759 cri.go:89] found id: ""
	I0729 14:41:20.311591 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.311600 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:20.311606 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:20.311668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:20.350267 1039759 cri.go:89] found id: ""
	I0729 14:41:20.350300 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.350313 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:20.350322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:20.350379 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:20.384183 1039759 cri.go:89] found id: ""
	I0729 14:41:20.384213 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.384220 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:20.384227 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:20.384288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:20.422330 1039759 cri.go:89] found id: ""
	I0729 14:41:20.422358 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.422367 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:20.422373 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:20.422442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:20.465537 1039759 cri.go:89] found id: ""
	I0729 14:41:20.465568 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.465577 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:20.465586 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:20.465663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:20.507661 1039759 cri.go:89] found id: ""
	I0729 14:41:20.507691 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.507701 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:20.507710 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:20.507774 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:20.545830 1039759 cri.go:89] found id: ""
	I0729 14:41:20.545857 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.545866 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:20.545872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:20.545936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:20.586311 1039759 cri.go:89] found id: ""
	I0729 14:41:20.586345 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.586354 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:20.586364 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:20.586379 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:20.635183 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:20.635224 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:20.649660 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:20.649701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:20.729588 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:20.729613 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:20.729632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:20.811565 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:20.811605 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:23.354318 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:23.367784 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:23.367862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:23.401929 1039759 cri.go:89] found id: ""
	I0729 14:41:23.401956 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.401965 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:23.401970 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:23.402033 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:23.437130 1039759 cri.go:89] found id: ""
	I0729 14:41:23.437161 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.437185 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:23.437205 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:23.437267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:23.474029 1039759 cri.go:89] found id: ""
	I0729 14:41:23.474066 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.474078 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:23.474087 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:23.474159 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:23.506678 1039759 cri.go:89] found id: ""
	I0729 14:41:23.506714 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.506725 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:23.506732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:23.506791 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:23.541578 1039759 cri.go:89] found id: ""
	I0729 14:41:23.541618 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.541628 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:23.541636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:23.541709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:23.575852 1039759 cri.go:89] found id: ""
	I0729 14:41:23.575883 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.575891 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:23.575898 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:23.575955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:23.610611 1039759 cri.go:89] found id: ""
	I0729 14:41:23.610638 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.610646 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:23.610653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:23.610717 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:23.650403 1039759 cri.go:89] found id: ""
	I0729 14:41:23.650429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.650438 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:23.650448 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:23.650460 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:23.701856 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:23.701899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:23.716925 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:23.716958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:23.790678 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:23.790699 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:23.790717 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:23.873204 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:23.873242 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:22.794021 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:25.294289 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:22.173787 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:24.673139 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:23.708892 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.207121 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.414319 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:26.428069 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:26.428152 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:26.462538 1039759 cri.go:89] found id: ""
	I0729 14:41:26.462578 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.462590 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:26.462599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:26.462687 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:26.496461 1039759 cri.go:89] found id: ""
	I0729 14:41:26.496501 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.496513 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:26.496521 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:26.496593 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:26.534152 1039759 cri.go:89] found id: ""
	I0729 14:41:26.534190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.534203 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:26.534210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:26.534273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:26.572986 1039759 cri.go:89] found id: ""
	I0729 14:41:26.573016 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.573024 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:26.573030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:26.573097 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:26.607330 1039759 cri.go:89] found id: ""
	I0729 14:41:26.607359 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.607370 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:26.607378 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:26.607445 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:26.643023 1039759 cri.go:89] found id: ""
	I0729 14:41:26.643056 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.643067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:26.643078 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:26.643145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:26.679820 1039759 cri.go:89] found id: ""
	I0729 14:41:26.679846 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.679856 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:26.679865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:26.679930 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:26.716433 1039759 cri.go:89] found id: ""
	I0729 14:41:26.716462 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.716470 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:26.716480 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:26.716494 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:26.794508 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:26.794529 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:26.794542 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:26.876663 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:26.876701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:26.917309 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:26.917343 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:26.969397 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:26.969436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:27.294711 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.793946 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.679220 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.173259 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:31.175213 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:28.207613 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:30.707297 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.483935 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:29.497502 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:29.497585 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:29.532671 1039759 cri.go:89] found id: ""
	I0729 14:41:29.532698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.532712 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:29.532719 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:29.532784 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:29.568058 1039759 cri.go:89] found id: ""
	I0729 14:41:29.568085 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.568096 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:29.568103 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:29.568176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:29.601173 1039759 cri.go:89] found id: ""
	I0729 14:41:29.601206 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.601216 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:29.601225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:29.601284 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:29.634333 1039759 cri.go:89] found id: ""
	I0729 14:41:29.634372 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.634384 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:29.634393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:29.634460 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:29.669669 1039759 cri.go:89] found id: ""
	I0729 14:41:29.669698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.669706 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:29.669712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:29.669777 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:29.702847 1039759 cri.go:89] found id: ""
	I0729 14:41:29.702876 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.702886 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:29.702894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:29.702960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:29.740713 1039759 cri.go:89] found id: ""
	I0729 14:41:29.740743 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.740754 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:29.740762 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:29.740846 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:29.777795 1039759 cri.go:89] found id: ""
	I0729 14:41:29.777829 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.777841 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:29.777853 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:29.777869 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:29.858713 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:29.858758 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:29.896873 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:29.896914 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:29.946905 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:29.946945 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:29.960136 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:29.960170 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:30.035951 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.536130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:32.549431 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:32.549501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:32.586069 1039759 cri.go:89] found id: ""
	I0729 14:41:32.586098 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.586117 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:32.586125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:32.586183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:32.623094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.623123 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.623132 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:32.623138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:32.623205 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:32.658370 1039759 cri.go:89] found id: ""
	I0729 14:41:32.658406 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.658418 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:32.658426 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:32.658492 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:32.696436 1039759 cri.go:89] found id: ""
	I0729 14:41:32.696469 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.696478 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:32.696484 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:32.696551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:32.731306 1039759 cri.go:89] found id: ""
	I0729 14:41:32.731340 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.731352 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:32.731361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:32.731431 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:32.767049 1039759 cri.go:89] found id: ""
	I0729 14:41:32.767087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.767098 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:32.767106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:32.767179 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:32.805094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.805126 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.805138 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:32.805147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:32.805223 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:32.840088 1039759 cri.go:89] found id: ""
	I0729 14:41:32.840116 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.840125 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:32.840137 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:32.840155 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:32.854065 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:32.854095 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:32.921447 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.921477 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:32.921493 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:33.005086 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:33.005129 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:33.042555 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:33.042617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:31.795000 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:34.293349 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:33.673734 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.674275 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:32.707849 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.210238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.593173 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:35.605965 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:35.606031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:35.639315 1039759 cri.go:89] found id: ""
	I0729 14:41:35.639355 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.639367 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:35.639374 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:35.639466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:35.678657 1039759 cri.go:89] found id: ""
	I0729 14:41:35.678686 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.678695 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:35.678700 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:35.678764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:35.714108 1039759 cri.go:89] found id: ""
	I0729 14:41:35.714136 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.714147 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:35.714155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:35.714220 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:35.748793 1039759 cri.go:89] found id: ""
	I0729 14:41:35.748820 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.748831 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:35.748837 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:35.748891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:35.788853 1039759 cri.go:89] found id: ""
	I0729 14:41:35.788884 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.788895 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:35.788903 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:35.788971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:35.825032 1039759 cri.go:89] found id: ""
	I0729 14:41:35.825059 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.825067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:35.825074 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:35.825126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:35.859990 1039759 cri.go:89] found id: ""
	I0729 14:41:35.860022 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.860033 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:35.860041 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:35.860131 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:35.894318 1039759 cri.go:89] found id: ""
	I0729 14:41:35.894352 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.894364 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:35.894377 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:35.894393 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:35.907591 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:35.907617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:35.975000 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:35.975023 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:35.975040 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:36.056188 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:36.056226 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:36.094569 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:36.094606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.648685 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:38.661546 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:38.661612 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:38.698658 1039759 cri.go:89] found id: ""
	I0729 14:41:38.698692 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.698704 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:38.698711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:38.698797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:38.731239 1039759 cri.go:89] found id: ""
	I0729 14:41:38.731274 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.731282 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:38.731288 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:38.731341 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:38.766549 1039759 cri.go:89] found id: ""
	I0729 14:41:38.766583 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.766594 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:38.766602 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:38.766663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:38.803347 1039759 cri.go:89] found id: ""
	I0729 14:41:38.803374 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.803385 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:38.803393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:38.803467 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:38.840327 1039759 cri.go:89] found id: ""
	I0729 14:41:38.840363 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.840374 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:38.840384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:38.840480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:38.874181 1039759 cri.go:89] found id: ""
	I0729 14:41:38.874211 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.874219 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:38.874225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:38.874293 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:36.297301 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.794975 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.173718 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:40.675880 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:37.707171 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:39.709125 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:42.206569 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.908642 1039759 cri.go:89] found id: ""
	I0729 14:41:38.908674 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.908686 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:38.908694 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:38.908762 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:38.945081 1039759 cri.go:89] found id: ""
	I0729 14:41:38.945107 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.945116 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:38.945126 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:38.945140 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.999792 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:38.999826 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:39.013396 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:39.013421 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:39.077975 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:39.077998 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:39.078016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:39.169606 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:39.169654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.716258 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:41.730508 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:41.730579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:41.766457 1039759 cri.go:89] found id: ""
	I0729 14:41:41.766490 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.766498 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:41.766505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:41.766571 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:41.801073 1039759 cri.go:89] found id: ""
	I0729 14:41:41.801099 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.801109 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:41.801117 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:41.801178 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:41.836962 1039759 cri.go:89] found id: ""
	I0729 14:41:41.836986 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.836997 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:41.837005 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:41.837072 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:41.870169 1039759 cri.go:89] found id: ""
	I0729 14:41:41.870195 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.870205 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:41.870213 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:41.870274 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:41.902298 1039759 cri.go:89] found id: ""
	I0729 14:41:41.902323 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.902331 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:41.902337 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:41.902387 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:41.935394 1039759 cri.go:89] found id: ""
	I0729 14:41:41.935429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.935441 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:41.935449 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:41.935513 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:41.972397 1039759 cri.go:89] found id: ""
	I0729 14:41:41.972437 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.972448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:41.972456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:41.972525 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:42.006477 1039759 cri.go:89] found id: ""
	I0729 14:41:42.006503 1039759 logs.go:276] 0 containers: []
	W0729 14:41:42.006513 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:42.006526 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:42.006540 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:42.053853 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:42.053886 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:42.067143 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:42.067172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:42.135406 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:42.135432 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:42.135449 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:42.212571 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:42.212603 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.293241 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.294160 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.793697 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.173087 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.174327 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.206854 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:46.707167 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.751283 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:44.764600 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:44.764688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:44.800821 1039759 cri.go:89] found id: ""
	I0729 14:41:44.800850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.800857 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:44.800863 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:44.800924 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:44.834638 1039759 cri.go:89] found id: ""
	I0729 14:41:44.834670 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.834680 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:44.834686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:44.834744 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:44.870198 1039759 cri.go:89] found id: ""
	I0729 14:41:44.870225 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.870237 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:44.870245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:44.870312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:44.904588 1039759 cri.go:89] found id: ""
	I0729 14:41:44.904620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.904631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:44.904639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:44.904713 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:44.939442 1039759 cri.go:89] found id: ""
	I0729 14:41:44.939467 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.939474 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:44.939480 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:44.939541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:44.972771 1039759 cri.go:89] found id: ""
	I0729 14:41:44.972799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.972808 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:44.972815 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:44.972888 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:45.007513 1039759 cri.go:89] found id: ""
	I0729 14:41:45.007540 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.007549 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:45.007557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:45.007626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:45.038752 1039759 cri.go:89] found id: ""
	I0729 14:41:45.038778 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.038787 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:45.038797 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:45.038821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:45.089807 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:45.089838 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:45.103188 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:45.103221 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:45.174509 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:45.174532 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:45.174554 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:45.255288 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:45.255327 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:47.799207 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:47.814781 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:47.814866 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:47.855111 1039759 cri.go:89] found id: ""
	I0729 14:41:47.855143 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.855156 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:47.855164 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:47.855230 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:47.892542 1039759 cri.go:89] found id: ""
	I0729 14:41:47.892577 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.892589 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:47.892603 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:47.892674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:47.933408 1039759 cri.go:89] found id: ""
	I0729 14:41:47.933439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.933451 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:47.933458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:47.933531 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:47.970397 1039759 cri.go:89] found id: ""
	I0729 14:41:47.970427 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.970439 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:47.970447 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:47.970514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:48.006852 1039759 cri.go:89] found id: ""
	I0729 14:41:48.006880 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.006891 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:48.006899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:48.006967 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:48.046766 1039759 cri.go:89] found id: ""
	I0729 14:41:48.046799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.046811 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:48.046820 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:48.046893 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:48.084354 1039759 cri.go:89] found id: ""
	I0729 14:41:48.084380 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.084387 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:48.084393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:48.084468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:48.121526 1039759 cri.go:89] found id: ""
	I0729 14:41:48.121559 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.121571 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:48.121582 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:48.121606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:48.136753 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:48.136784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:48.206914 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:48.206942 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:48.206958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:48.283843 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:48.283882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:48.325845 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:48.325878 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:47.794096 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.295275 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:47.182903 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.672827 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.206572 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.206900 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.881346 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:50.894098 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:50.894177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:50.927345 1039759 cri.go:89] found id: ""
	I0729 14:41:50.927375 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.927386 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:50.927399 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:50.927466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:50.962700 1039759 cri.go:89] found id: ""
	I0729 14:41:50.962726 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.962734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:50.962740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:50.962804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:50.997299 1039759 cri.go:89] found id: ""
	I0729 14:41:50.997334 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.997346 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:50.997354 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:50.997419 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:51.030157 1039759 cri.go:89] found id: ""
	I0729 14:41:51.030190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.030202 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:51.030211 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:51.030288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:51.063123 1039759 cri.go:89] found id: ""
	I0729 14:41:51.063151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.063162 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:51.063170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:51.063237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:51.096772 1039759 cri.go:89] found id: ""
	I0729 14:41:51.096819 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.096830 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:51.096838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:51.096912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:51.131976 1039759 cri.go:89] found id: ""
	I0729 14:41:51.132004 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.132014 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:51.132022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:51.132095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:51.167560 1039759 cri.go:89] found id: ""
	I0729 14:41:51.167599 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.167610 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:51.167622 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:51.167640 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:51.229416 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:51.229455 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:51.243576 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:51.243604 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:51.311103 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:51.311123 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:51.311139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:51.396369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:51.396432 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:52.793981 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.294172 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.673945 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:54.173681 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:56.174098 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.207656 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.709310 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.942329 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:53.955960 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:53.956027 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:53.988039 1039759 cri.go:89] found id: ""
	I0729 14:41:53.988074 1039759 logs.go:276] 0 containers: []
	W0729 14:41:53.988085 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:53.988094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:53.988162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:54.020948 1039759 cri.go:89] found id: ""
	I0729 14:41:54.020981 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.020992 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:54.020999 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:54.021067 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:54.053716 1039759 cri.go:89] found id: ""
	I0729 14:41:54.053744 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.053752 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:54.053759 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:54.053811 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:54.092348 1039759 cri.go:89] found id: ""
	I0729 14:41:54.092378 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.092390 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:54.092398 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:54.092471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:54.126114 1039759 cri.go:89] found id: ""
	I0729 14:41:54.126176 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.126189 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:54.126199 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:54.126316 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:54.162125 1039759 cri.go:89] found id: ""
	I0729 14:41:54.162157 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.162167 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:54.162174 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:54.162241 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:54.202407 1039759 cri.go:89] found id: ""
	I0729 14:41:54.202439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.202448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:54.202456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:54.202522 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:54.238650 1039759 cri.go:89] found id: ""
	I0729 14:41:54.238684 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.238695 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:54.238704 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:54.238718 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:54.291200 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:54.291243 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:54.306381 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:54.306415 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:54.371355 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:54.371384 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:54.371399 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:54.455200 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:54.455237 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:56.994689 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:57.007893 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:57.007958 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:57.041775 1039759 cri.go:89] found id: ""
	I0729 14:41:57.041808 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.041820 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:57.041828 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:57.041894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:57.075409 1039759 cri.go:89] found id: ""
	I0729 14:41:57.075442 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.075454 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:57.075462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:57.075524 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:57.120963 1039759 cri.go:89] found id: ""
	I0729 14:41:57.121000 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.121011 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:57.121019 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:57.121088 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:57.164882 1039759 cri.go:89] found id: ""
	I0729 14:41:57.164912 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.164923 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:57.164932 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:57.165001 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:57.198511 1039759 cri.go:89] found id: ""
	I0729 14:41:57.198537 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.198545 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:57.198550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:57.198604 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:57.238516 1039759 cri.go:89] found id: ""
	I0729 14:41:57.238544 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.238552 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:57.238559 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:57.238622 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:57.271823 1039759 cri.go:89] found id: ""
	I0729 14:41:57.271854 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.271865 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:57.271873 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:57.271937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:57.308435 1039759 cri.go:89] found id: ""
	I0729 14:41:57.308460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.308472 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:57.308483 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:57.308506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:57.359783 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:57.359818 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:57.372669 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:57.372698 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:57.440979 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:57.441004 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:57.441018 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:57.520105 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:57.520139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:57.295421 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:59.793704 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.673850 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:01.172547 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.207493 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.208108 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:02.208334 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.060542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:00.076125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:00.076192 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:00.113095 1039759 cri.go:89] found id: ""
	I0729 14:42:00.113129 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.113137 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:00.113150 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:00.113206 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:00.154104 1039759 cri.go:89] found id: ""
	I0729 14:42:00.154132 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.154139 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:00.154146 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:00.154202 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:00.190416 1039759 cri.go:89] found id: ""
	I0729 14:42:00.190443 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.190454 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:00.190462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:00.190532 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:00.228138 1039759 cri.go:89] found id: ""
	I0729 14:42:00.228173 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.228185 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:00.228192 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:00.228261 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:00.265679 1039759 cri.go:89] found id: ""
	I0729 14:42:00.265706 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.265715 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:00.265721 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:00.265787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:00.300283 1039759 cri.go:89] found id: ""
	I0729 14:42:00.300315 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.300333 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:00.300341 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:00.300433 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:00.339224 1039759 cri.go:89] found id: ""
	I0729 14:42:00.339255 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.339264 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:00.339270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:00.339333 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:00.375780 1039759 cri.go:89] found id: ""
	I0729 14:42:00.375815 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.375826 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:00.375836 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:00.375851 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:00.425145 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:00.425190 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:00.438860 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:00.438891 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:00.512668 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:00.512695 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:00.512714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:00.597083 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:00.597139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.141962 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:03.156295 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:03.156372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:03.192860 1039759 cri.go:89] found id: ""
	I0729 14:42:03.192891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.192902 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:03.192911 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:03.192982 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:03.234078 1039759 cri.go:89] found id: ""
	I0729 14:42:03.234104 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.234113 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:03.234119 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:03.234171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:03.268099 1039759 cri.go:89] found id: ""
	I0729 14:42:03.268124 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.268131 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:03.268138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:03.268197 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:03.306470 1039759 cri.go:89] found id: ""
	I0729 14:42:03.306498 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.306507 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:03.306513 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:03.306596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:03.341902 1039759 cri.go:89] found id: ""
	I0729 14:42:03.341933 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.341944 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:03.341952 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:03.342019 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:03.377235 1039759 cri.go:89] found id: ""
	I0729 14:42:03.377271 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.377282 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:03.377291 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:03.377355 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:03.411273 1039759 cri.go:89] found id: ""
	I0729 14:42:03.411308 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.411316 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:03.411322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:03.411397 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:03.446482 1039759 cri.go:89] found id: ""
	I0729 14:42:03.446511 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.446519 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:03.446530 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:03.446545 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:03.460222 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:03.460262 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:03.548149 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:03.548175 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:03.548191 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:03.640563 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:03.640608 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.681685 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:03.681713 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:02.293412 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.793239 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:03.174082 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:05.674438 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.706798 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.707818 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.234967 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:06.249656 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:06.249726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:06.284768 1039759 cri.go:89] found id: ""
	I0729 14:42:06.284798 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.284810 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:06.284822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:06.284880 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:06.321109 1039759 cri.go:89] found id: ""
	I0729 14:42:06.321140 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.321150 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:06.321158 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:06.321229 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:06.357238 1039759 cri.go:89] found id: ""
	I0729 14:42:06.357269 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.357278 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:06.357284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:06.357342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:06.391613 1039759 cri.go:89] found id: ""
	I0729 14:42:06.391643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.391653 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:06.391661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:06.391726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:06.428782 1039759 cri.go:89] found id: ""
	I0729 14:42:06.428813 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.428823 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:06.428831 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:06.428890 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:06.463558 1039759 cri.go:89] found id: ""
	I0729 14:42:06.463596 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.463607 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:06.463615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:06.463683 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:06.500442 1039759 cri.go:89] found id: ""
	I0729 14:42:06.500474 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.500484 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:06.500501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:06.500579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:06.535589 1039759 cri.go:89] found id: ""
	I0729 14:42:06.535627 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.535638 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:06.535650 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:06.535668 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:06.584641 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:06.584676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:06.597702 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:06.597737 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:06.664499 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:06.664537 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:06.664555 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:06.744808 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:06.744845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:06.793853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.294853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.172993 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:10.174863 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.707874 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:11.209387 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.286151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:09.307822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:09.307892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:09.369334 1039759 cri.go:89] found id: ""
	I0729 14:42:09.369363 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.369373 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:09.369381 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:09.369458 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:09.402302 1039759 cri.go:89] found id: ""
	I0729 14:42:09.402334 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.402345 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:09.402353 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:09.402423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:09.436351 1039759 cri.go:89] found id: ""
	I0729 14:42:09.436380 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.436402 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:09.436429 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:09.436501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:09.467735 1039759 cri.go:89] found id: ""
	I0729 14:42:09.467768 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.467780 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:09.467788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:09.467849 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:09.503328 1039759 cri.go:89] found id: ""
	I0729 14:42:09.503355 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.503367 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:09.503376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:09.503438 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:09.540012 1039759 cri.go:89] found id: ""
	I0729 14:42:09.540039 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.540047 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:09.540053 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:09.540106 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:09.576737 1039759 cri.go:89] found id: ""
	I0729 14:42:09.576801 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.576814 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:09.576822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:09.576920 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:09.614624 1039759 cri.go:89] found id: ""
	I0729 14:42:09.614651 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.614659 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:09.614669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:09.614684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:09.650533 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:09.650580 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:09.709144 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:09.709175 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:09.724147 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:09.724173 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:09.790737 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:09.790760 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:09.790775 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.376968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:12.390344 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:12.390409 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:12.424820 1039759 cri.go:89] found id: ""
	I0729 14:42:12.424849 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.424860 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:12.424876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:12.424943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:12.457444 1039759 cri.go:89] found id: ""
	I0729 14:42:12.457480 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.457492 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:12.457500 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:12.457561 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:12.490027 1039759 cri.go:89] found id: ""
	I0729 14:42:12.490058 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.490069 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:12.490077 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:12.490145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:12.523229 1039759 cri.go:89] found id: ""
	I0729 14:42:12.523256 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.523265 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:12.523270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:12.523321 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:12.557849 1039759 cri.go:89] found id: ""
	I0729 14:42:12.557875 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.557885 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:12.557891 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:12.557951 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:12.592943 1039759 cri.go:89] found id: ""
	I0729 14:42:12.592973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.592982 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:12.592989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:12.593059 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:12.626495 1039759 cri.go:89] found id: ""
	I0729 14:42:12.626531 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.626539 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:12.626557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:12.626641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:12.663764 1039759 cri.go:89] found id: ""
	I0729 14:42:12.663793 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.663805 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:12.663818 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:12.663835 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:12.722521 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:12.722556 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:12.736476 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:12.736505 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:12.809582 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:12.809617 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:12.809637 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.890665 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:12.890712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:11.793144 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.793447 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.794630 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:12.673257 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.173702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.707929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.707964 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.429702 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:15.443258 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:15.443340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:15.477170 1039759 cri.go:89] found id: ""
	I0729 14:42:15.477198 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.477207 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:15.477212 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:15.477266 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:15.511614 1039759 cri.go:89] found id: ""
	I0729 14:42:15.511652 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.511665 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:15.511671 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:15.511739 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:15.548472 1039759 cri.go:89] found id: ""
	I0729 14:42:15.548501 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.548511 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:15.548519 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:15.548590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:15.589060 1039759 cri.go:89] found id: ""
	I0729 14:42:15.589090 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.589102 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:15.589110 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:15.589185 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:15.622846 1039759 cri.go:89] found id: ""
	I0729 14:42:15.622873 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.622882 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:15.622887 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:15.622943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:15.656193 1039759 cri.go:89] found id: ""
	I0729 14:42:15.656220 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.656229 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:15.656237 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:15.656307 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:15.691301 1039759 cri.go:89] found id: ""
	I0729 14:42:15.691336 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.691348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:15.691357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:15.691420 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:15.729923 1039759 cri.go:89] found id: ""
	I0729 14:42:15.729963 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.729974 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:15.729988 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:15.730004 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:15.783531 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:15.783569 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:15.799590 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:15.799619 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:15.874849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:15.874886 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:15.874901 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:15.957384 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:15.957424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
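The "0 containers" / `No container was found matching ...` results above come from minikube shelling out to crictl on the node for each control-plane component. A minimal local sketch of that probe (illustrative only, not minikube's ssh_runner; it assumes crictl is installed on the current host and runnable via sudo):

// container_probe.go: mirrors `sudo crictl ps -a --quiet --name=<component>`
// as seen in the log above. Illustrative sketch, not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the container IDs crictl reports for a component name.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// crictl --quiet prints one container ID per line; an empty output means
	// "0 containers", which is what the failing node above keeps reporting.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Printf("%s: error: %v\n", component, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
	}
}

On the node being diagnosed here every component returns an empty ID list, which is why the subsequent gathering falls back to kubelet, dmesg, and CRI-O journal output and why `kubectl describe nodes` fails against localhost:8443.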
	I0729 14:42:18.497035 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:18.511538 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:18.511616 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:18.550512 1039759 cri.go:89] found id: ""
	I0729 14:42:18.550552 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.550573 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:18.550582 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:18.550642 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:18.585910 1039759 cri.go:89] found id: ""
	I0729 14:42:18.585942 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.585954 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:18.585962 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:18.586031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:18.619680 1039759 cri.go:89] found id: ""
	I0729 14:42:18.619712 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.619722 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:18.619730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:18.619799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:18.651559 1039759 cri.go:89] found id: ""
	I0729 14:42:18.651592 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.651604 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:18.651613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:18.651688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:18.686668 1039759 cri.go:89] found id: ""
	I0729 14:42:18.686693 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.686701 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:18.686711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:18.686764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:18.722832 1039759 cri.go:89] found id: ""
	I0729 14:42:18.722859 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.722869 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:18.722876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:18.722927 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:18.758261 1039759 cri.go:89] found id: ""
	I0729 14:42:18.758289 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.758302 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:18.758310 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:18.758378 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:18.795190 1039759 cri.go:89] found id: ""
	I0729 14:42:18.795216 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.795227 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:18.795237 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:18.795251 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.835331 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:18.835366 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:17.796916 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.294082 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:17.673000 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:19.674010 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.209178 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.707421 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.889707 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:18.889745 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:18.902477 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:18.902503 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:18.970712 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:18.970735 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:18.970748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:21.552092 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:21.566581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.566669 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.600230 1039759 cri.go:89] found id: ""
	I0729 14:42:21.600261 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.600275 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:21.600283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.600346 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.636576 1039759 cri.go:89] found id: ""
	I0729 14:42:21.636616 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.636627 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:21.636635 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.636705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.672944 1039759 cri.go:89] found id: ""
	I0729 14:42:21.672973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.672984 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:21.672997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.673063 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.708555 1039759 cri.go:89] found id: ""
	I0729 14:42:21.708582 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.708601 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:21.708613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:21.708673 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:21.744862 1039759 cri.go:89] found id: ""
	I0729 14:42:21.744891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.744902 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:21.744908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:21.744973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:21.779084 1039759 cri.go:89] found id: ""
	I0729 14:42:21.779111 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.779119 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:21.779126 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:21.779183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:21.819931 1039759 cri.go:89] found id: ""
	I0729 14:42:21.819972 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.819981 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:21.819989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:21.820047 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:21.855472 1039759 cri.go:89] found id: ""
	I0729 14:42:21.855500 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.855509 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:21.855522 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:21.855539 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:21.925561 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:21.925579 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:21.925596 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.015986 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:22.016032 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:22.059898 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:22.059935 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:22.129018 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.129055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:21.787886 1039263 pod_ready.go:81] duration metric: took 4m0.000465481s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:21.787929 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 14:42:21.787945 1039263 pod_ready.go:38] duration metric: took 4m5.237036546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
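The long runs of pod_ready.go:102 lines above, and the 4-minute timeout recorded here, come from minikube polling the pod's Ready condition until a deadline. A rough, illustrative equivalent using k8s.io/client-go — not minikube's pod_ready.go, and with a placeholder kubeconfig path — could look like this:

// pod_ready_sketch.go: poll a pod's Ready condition until a deadline.
// Pod name and namespace are taken from the log above; the kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the wait above gave up after ~4m
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-5msnp", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// While the condition is not met, the log prints `has status "Ready":"False"`.
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}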
	I0729 14:42:21.787973 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:42:21.788025 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.788089 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.857594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:21.857613 1039263 cri.go:89] found id: ""
	I0729 14:42:21.857620 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:21.857674 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.862462 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.862523 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.903562 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:21.903594 1039263 cri.go:89] found id: ""
	I0729 14:42:21.903604 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:21.903660 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.908232 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.908327 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.947632 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:21.947663 1039263 cri.go:89] found id: ""
	I0729 14:42:21.947674 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:21.947737 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.952576 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.952649 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.995318 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:21.995343 1039263 cri.go:89] found id: ""
	I0729 14:42:21.995351 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:21.995418 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.000352 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:22.000440 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:22.040544 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.040572 1039263 cri.go:89] found id: ""
	I0729 14:42:22.040582 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:22.040648 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.044840 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:22.044910 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:22.090787 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:22.090816 1039263 cri.go:89] found id: ""
	I0729 14:42:22.090827 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:22.090897 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.096748 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:22.096826 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:22.143491 1039263 cri.go:89] found id: ""
	I0729 14:42:22.143522 1039263 logs.go:276] 0 containers: []
	W0729 14:42:22.143534 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:22.143541 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:22.143609 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:22.179378 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:22.179404 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:22.179409 1039263 cri.go:89] found id: ""
	I0729 14:42:22.179419 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:22.179482 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.184686 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.189009 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:22.189029 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:22.250475 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:22.250510 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.286581 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:22.286622 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.325541 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:22.325570 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.831822 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.831875 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:22.846540 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:22.846588 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:22.970758 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:22.970796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:23.013428 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:23.013467 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:23.064784 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:23.064820 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:23.111615 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:23.111653 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:23.151296 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:23.151328 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:23.198650 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:23.198692 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:23.259196 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:23.259247 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.808980 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:25.829180 1039263 api_server.go:72] duration metric: took 4m16.997740137s to wait for apiserver process to appear ...
	I0729 14:42:25.829211 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:42:25.829260 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:25.829335 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:25.875138 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.875167 1039263 cri.go:89] found id: ""
	I0729 14:42:25.875175 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:25.875230 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.879855 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:25.879937 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:25.916938 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:25.916964 1039263 cri.go:89] found id: ""
	I0729 14:42:25.916974 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:25.917036 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.921166 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:25.921224 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:25.958196 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:25.958224 1039263 cri.go:89] found id: ""
	I0729 14:42:25.958234 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:25.958300 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.962697 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:25.962760 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:26.000162 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:26.000195 1039263 cri.go:89] found id: ""
	I0729 14:42:26.000206 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:26.000277 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.004518 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:26.004594 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:26.041099 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:26.041133 1039263 cri.go:89] found id: ""
	I0729 14:42:26.041144 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:26.041208 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.045334 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:26.045412 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:26.082783 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:26.082815 1039263 cri.go:89] found id: ""
	I0729 14:42:26.082826 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:26.082901 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.086996 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:26.087063 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:26.123636 1039263 cri.go:89] found id: ""
	I0729 14:42:26.123677 1039263 logs.go:276] 0 containers: []
	W0729 14:42:26.123688 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:26.123694 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:26.123756 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:26.163819 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.163849 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.163855 1039263 cri.go:89] found id: ""
	I0729 14:42:26.163864 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:26.163929 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.168611 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.173125 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:26.173155 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.173593 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:22.708101 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:25.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:27.207926 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.645474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:24.658107 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:24.658171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:24.696604 1039759 cri.go:89] found id: ""
	I0729 14:42:24.696635 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.696645 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:24.696653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:24.696725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:24.733862 1039759 cri.go:89] found id: ""
	I0729 14:42:24.733887 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.733894 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:24.733901 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:24.733957 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:24.770614 1039759 cri.go:89] found id: ""
	I0729 14:42:24.770644 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.770656 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:24.770664 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:24.770734 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:24.806368 1039759 cri.go:89] found id: ""
	I0729 14:42:24.806394 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.806403 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:24.806408 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:24.806470 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:24.838490 1039759 cri.go:89] found id: ""
	I0729 14:42:24.838526 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.838534 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:24.838541 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:24.838596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:24.871017 1039759 cri.go:89] found id: ""
	I0729 14:42:24.871043 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.871051 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:24.871057 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:24.871128 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:24.903281 1039759 cri.go:89] found id: ""
	I0729 14:42:24.903311 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.903322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:24.903330 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:24.903403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:24.937245 1039759 cri.go:89] found id: ""
	I0729 14:42:24.937279 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.937291 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:24.937304 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:24.937319 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:24.989518 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:24.989551 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:25.005021 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:25.005055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:25.080849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:25.080877 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:25.080893 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:25.163742 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:25.163784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:27.706182 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:27.719350 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:27.719425 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:27.756955 1039759 cri.go:89] found id: ""
	I0729 14:42:27.756982 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.756990 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:27.756997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:27.757054 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:27.791975 1039759 cri.go:89] found id: ""
	I0729 14:42:27.792014 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.792025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:27.792033 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:27.792095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:27.834188 1039759 cri.go:89] found id: ""
	I0729 14:42:27.834215 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.834223 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:27.834230 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:27.834296 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:27.867798 1039759 cri.go:89] found id: ""
	I0729 14:42:27.867834 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.867843 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:27.867851 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:27.867918 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:27.900316 1039759 cri.go:89] found id: ""
	I0729 14:42:27.900343 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.900351 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:27.900357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:27.900422 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:27.932361 1039759 cri.go:89] found id: ""
	I0729 14:42:27.932391 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.932402 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:27.932425 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:27.932493 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:27.965530 1039759 cri.go:89] found id: ""
	I0729 14:42:27.965562 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.965573 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:27.965581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:27.965651 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:27.999582 1039759 cri.go:89] found id: ""
	I0729 14:42:27.999608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.999617 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:27.999626 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:27.999654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:28.069415 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:28.069438 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:28.069454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:28.149781 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:28.149821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:28.190045 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:28.190072 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:28.244147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:28.244188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.217755 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:26.217796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.257363 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:26.257399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.297502 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:26.297534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:26.729336 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:26.729370 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:26.779172 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:26.779213 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.794369 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:26.794399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:26.857964 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:26.858000 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.895052 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:26.895083 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:26.936360 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:26.936395 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:27.037118 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:27.037160 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:27.089764 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:27.089798 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:27.134009 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:27.134042 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.690960 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:42:29.696457 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:42:29.697313 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:42:29.697335 1039263 api_server.go:131] duration metric: took 3.868117139s to wait for apiserver health ...
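After the apiserver process appears, minikube waits for the /healthz endpoint at https://192.168.50.53:8443/healthz to return 200 "ok", as logged just above. A bare-bones probe of that endpoint (illustrative only: it skips TLS verification and relies on the cluster allowing anonymous access to /healthz, whereas minikube's api_server.go uses the cluster's own certificates):

// healthz_probe.go: minimal sketch of an apiserver health check.
// The address comes from the log above; everything else is assumed.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		// InsecureSkipVerify is for illustration only; do not use in real tooling.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.53:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", matching the log above.
	fmt.Println(resp.StatusCode, string(body))
}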
	I0729 14:42:29.697343 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:42:29.697370 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:29.697430 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:29.740594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:29.740623 1039263 cri.go:89] found id: ""
	I0729 14:42:29.740633 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:29.740696 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.745183 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:29.745257 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:29.780091 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:29.780112 1039263 cri.go:89] found id: ""
	I0729 14:42:29.780119 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:29.780178 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.784241 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:29.784305 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:29.825618 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:29.825641 1039263 cri.go:89] found id: ""
	I0729 14:42:29.825649 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:29.825715 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.830291 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:29.830351 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:29.866651 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:29.866685 1039263 cri.go:89] found id: ""
	I0729 14:42:29.866695 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:29.866758 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.871440 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:29.871494 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:29.911944 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:29.911968 1039263 cri.go:89] found id: ""
	I0729 14:42:29.911976 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:29.912037 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.916604 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:29.916680 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:29.954334 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.954361 1039263 cri.go:89] found id: ""
	I0729 14:42:29.954371 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:29.954446 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.959051 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:29.959130 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:29.996760 1039263 cri.go:89] found id: ""
	I0729 14:42:29.996795 1039263 logs.go:276] 0 containers: []
	W0729 14:42:29.996804 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:29.996812 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:29.996883 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:30.034562 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.034598 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.034604 1039263 cri.go:89] found id: ""
	I0729 14:42:30.034614 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:30.034682 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.039588 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.043866 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:30.043889 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:30.091309 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:30.091349 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:30.149888 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:30.149926 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:30.189441 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:30.189479 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:30.250850 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:30.250890 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.290077 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:30.290111 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.329035 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:30.329068 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:30.383068 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:30.383113 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:30.497009 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:30.497045 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:30.914489 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:30.914534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:30.972901 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:30.972951 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:31.021798 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.021838 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:31.040147 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:31.040182 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.674294 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.173375 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:31.173588 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.710051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:32.209382 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.593681 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:42:33.593711 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.593716 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.593719 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.593723 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.593725 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.593728 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.593733 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.593736 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.593744 1039263 system_pods.go:74] duration metric: took 3.896394577s to wait for pod list to return data ...
	I0729 14:42:33.593751 1039263 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:42:33.596176 1039263 default_sa.go:45] found service account: "default"
	I0729 14:42:33.596197 1039263 default_sa.go:55] duration metric: took 2.440561ms for default service account to be created ...
	I0729 14:42:33.596205 1039263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:42:33.601830 1039263 system_pods.go:86] 8 kube-system pods found
	I0729 14:42:33.601855 1039263 system_pods.go:89] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.601861 1039263 system_pods.go:89] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.601866 1039263 system_pods.go:89] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.601871 1039263 system_pods.go:89] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.601878 1039263 system_pods.go:89] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.601887 1039263 system_pods.go:89] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.601897 1039263 system_pods.go:89] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.601908 1039263 system_pods.go:89] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.601921 1039263 system_pods.go:126] duration metric: took 5.70985ms to wait for k8s-apps to be running ...
	I0729 14:42:33.601934 1039263 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:42:33.601994 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:33.620869 1039263 system_svc.go:56] duration metric: took 18.921974ms WaitForService to wait for kubelet
	I0729 14:42:33.620907 1039263 kubeadm.go:582] duration metric: took 4m24.7894747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:42:33.620939 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:42:33.623517 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:42:33.623538 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:42:33.623562 1039263 node_conditions.go:105] duration metric: took 2.617272ms to run NodePressure ...
	I0729 14:42:33.623582 1039263 start.go:241] waiting for startup goroutines ...
	I0729 14:42:33.623591 1039263 start.go:246] waiting for cluster config update ...
	I0729 14:42:33.623601 1039263 start.go:255] writing updated cluster config ...
	I0729 14:42:33.623897 1039263 ssh_runner.go:195] Run: rm -f paused
	I0729 14:42:33.677961 1039263 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:42:33.679952 1039263 out.go:177] * Done! kubectl is now configured to use "embed-certs-668123" cluster and "default" namespace by default
	I0729 14:42:30.758335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:30.771788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:30.771860 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:30.807608 1039759 cri.go:89] found id: ""
	I0729 14:42:30.807633 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.807641 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:30.807647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:30.807709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:30.842361 1039759 cri.go:89] found id: ""
	I0729 14:42:30.842389 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.842397 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:30.842404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:30.842474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:30.879123 1039759 cri.go:89] found id: ""
	I0729 14:42:30.879149 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.879157 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:30.879162 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:30.879228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:30.913042 1039759 cri.go:89] found id: ""
	I0729 14:42:30.913072 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.913084 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:30.913092 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:30.913162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:30.949867 1039759 cri.go:89] found id: ""
	I0729 14:42:30.949900 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.949910 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:30.949919 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:30.949988 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:30.997468 1039759 cri.go:89] found id: ""
	I0729 14:42:30.997497 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.997509 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:30.997516 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:30.997606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:31.039611 1039759 cri.go:89] found id: ""
	I0729 14:42:31.039643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.039654 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:31.039662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:31.039730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:31.085802 1039759 cri.go:89] found id: ""
	I0729 14:42:31.085839 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.085851 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:31.085862 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:31.085890 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:31.155919 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:31.155941 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:31.155954 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:31.232795 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:31.232833 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:31.270647 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:31.270682 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:31.324648 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.324685 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
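Note: the 1039759 run (the old-k8s-version v1.20.0 restart) cycles through every control-plane component, lists CRI containers by name with crictl, and finds none, which is why it keeps falling back to log gathering. A minimal sketch of that per-component scan, assuming crictl is available via sudo on the host; the component list mirrors the log, but the Go around it is not minikube's cri.go:

// crictl_scan.go - rough local sketch of the per-component container scan.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// crictl ps -a --quiet prints one container ID per line, or nothing at all.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}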
	I0729 14:42:33.839801 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:33.853358 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:33.853417 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:33.674345 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:36.174468 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:34.707752 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:37.209918 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.889294 1039759 cri.go:89] found id: ""
	I0729 14:42:33.889323 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.889334 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:33.889342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:33.889413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:33.930106 1039759 cri.go:89] found id: ""
	I0729 14:42:33.930130 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.930142 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:33.930149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:33.930211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:33.973607 1039759 cri.go:89] found id: ""
	I0729 14:42:33.973634 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.973646 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:33.973654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:33.973715 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:34.010103 1039759 cri.go:89] found id: ""
	I0729 14:42:34.010133 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.010142 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:34.010149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:34.010209 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:34.044050 1039759 cri.go:89] found id: ""
	I0729 14:42:34.044080 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.044092 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:34.044099 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:34.044174 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:34.081222 1039759 cri.go:89] found id: ""
	I0729 14:42:34.081250 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.081260 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:34.081268 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:34.081360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:34.115837 1039759 cri.go:89] found id: ""
	I0729 14:42:34.115878 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.115891 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:34.115899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:34.115973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:34.151086 1039759 cri.go:89] found id: ""
	I0729 14:42:34.151116 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.151126 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:34.151139 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:34.151156 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:34.164058 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:34.164087 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:34.238481 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:34.238503 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:34.238518 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:34.316236 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:34.316279 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:34.356281 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:34.356316 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
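Note: with no containers found, each cycle gathers diagnostics instead: journalctl for CRI-O and the kubelet, recent dmesg warnings, `kubectl describe nodes` (which fails while the apiserver is down), and a container-status listing. A hedged sketch of that gathering pass, running each pipeline through bash -c the way the log does; the command strings are copied from the log, the surrounding Go is an assumption:

// gather_logs.go - illustrative only; mirrors the shell commands in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		fmt.Printf("==> Gathering logs for %s ...\n", s.name)
		// CombinedOutput keeps stderr, which is where most failures show up.
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("    command failed: %v\n", err)
		}
		fmt.Print(string(out))
	}
}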
	I0729 14:42:36.910374 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:36.924907 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:36.925008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:36.960508 1039759 cri.go:89] found id: ""
	I0729 14:42:36.960535 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.960543 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:36.960550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:36.960631 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:36.999840 1039759 cri.go:89] found id: ""
	I0729 14:42:36.999869 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.999881 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:36.999889 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:36.999960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:37.032801 1039759 cri.go:89] found id: ""
	I0729 14:42:37.032832 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.032840 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:37.032847 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:37.032907 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:37.066359 1039759 cri.go:89] found id: ""
	I0729 14:42:37.066386 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.066394 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:37.066401 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:37.066454 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:37.103816 1039759 cri.go:89] found id: ""
	I0729 14:42:37.103844 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.103852 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:37.103859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:37.103922 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:37.137135 1039759 cri.go:89] found id: ""
	I0729 14:42:37.137175 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.137186 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:37.137194 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:37.137267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:37.170819 1039759 cri.go:89] found id: ""
	I0729 14:42:37.170851 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.170863 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:37.170871 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:37.170941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:37.206427 1039759 cri.go:89] found id: ""
	I0729 14:42:37.206456 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.206467 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:37.206478 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:37.206492 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:37.287119 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:37.287160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:37.331090 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:37.331119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:37.392147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:37.392189 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:37.406017 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:37.406047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:37.471644 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:38.673603 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:40.674214 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:39.706915 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:41.201453 1039440 pod_ready.go:81] duration metric: took 4m0.000454399s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:41.201488 1039440 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:42:41.201514 1039440 pod_ready.go:38] duration metric: took 4m13.052610312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:41.201553 1039440 kubeadm.go:597] duration metric: took 4m22.712976139s to restartPrimaryControlPlane
	W0729 14:42:41.201639 1039440 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:41.201696 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
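Note: the pod_ready lines above are a readiness poll that finally expires: metrics-server-569cc877fc-gmz64 never reports Ready within the 4m0s budget, so the restart gives up and resets the cluster. A minimal sketch of that kind of wait using client-go (the kubeconfig path and pod name are taken from this log; the helper and polling cadence are assumptions, not minikube's pod_ready.go):

// wait_pod_ready.go - sketch: wait up to 4 minutes for a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod carries a Ready=True condition.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, give up after 4m, the same budget the log shows expiring.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-gmz64", metav1.GetOptions{})
			if err != nil {
				return false, nil // not there yet; keep polling
			}
			return podReady(pod), nil
		})
	if err != nil {
		fmt.Println("timed out waiting for pod to be Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}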
	I0729 14:42:39.972835 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:39.985878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:39.985945 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:40.020312 1039759 cri.go:89] found id: ""
	I0729 14:42:40.020349 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.020360 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:40.020368 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:40.020456 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:40.055688 1039759 cri.go:89] found id: ""
	I0729 14:42:40.055721 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.055732 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:40.055740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:40.055799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:40.090432 1039759 cri.go:89] found id: ""
	I0729 14:42:40.090463 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.090472 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:40.090478 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:40.090549 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:40.127794 1039759 cri.go:89] found id: ""
	I0729 14:42:40.127823 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.127832 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:40.127838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:40.127894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:40.162911 1039759 cri.go:89] found id: ""
	I0729 14:42:40.162944 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.162953 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:40.162959 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:40.163020 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:40.201578 1039759 cri.go:89] found id: ""
	I0729 14:42:40.201608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.201619 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:40.201625 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:40.201684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:40.247314 1039759 cri.go:89] found id: ""
	I0729 14:42:40.247340 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.247348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:40.247363 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:40.247436 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:40.285393 1039759 cri.go:89] found id: ""
	I0729 14:42:40.285422 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.285431 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:40.285440 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:40.285458 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:40.299901 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:40.299933 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:40.372774 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:40.372802 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:40.372821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:40.454392 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:40.454447 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:40.494641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:40.494671 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:43.046060 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:43.058790 1039759 kubeadm.go:597] duration metric: took 4m3.37086398s to restartPrimaryControlPlane
	W0729 14:42:43.058888 1039759 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:43.058920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:43.544647 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:43.560304 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:42:43.570229 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:42:43.579922 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:42:43.579946 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:42:43.580004 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:42:43.589520 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:42:43.589591 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:42:43.600286 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:42:43.611565 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:42:43.611629 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:42:43.623432 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.633289 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:42:43.633338 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.643410 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:42:43.653723 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:42:43.653816 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
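Note: the stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane URL and removes the file when the URL is not found; here the files do not exist at all, so every grep exits 2 and the rm is a no-op. A rough Go equivalent of that check-then-remove step, with the endpoint and file list taken from the log and the rest assumed:

// stale_kubeconfig_cleanup.go - illustrative sketch of the grep-then-rm step.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing stale to clean up (the "No such file" case in the log).
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			fmt.Printf("%s: still points at %s, keeping\n", f, endpoint)
			continue
		}
		// The config references some other endpoint; remove it so kubeadm regenerates it.
		if err := os.Remove(f); err != nil {
			fmt.Printf("%s: remove failed: %v\n", f, err)
			continue
		}
		fmt.Printf("%s: removed stale config\n", f)
	}
}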
	I0729 14:42:43.663840 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:42:43.735243 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:42:43.735314 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:42:43.904148 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:42:43.904310 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:42:43.904480 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:42:44.101401 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:42:44.103392 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:42:44.103499 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:42:44.103580 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:42:44.103693 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:42:44.103829 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:42:44.103944 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:42:44.104054 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:42:44.104146 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:42:44.104360 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:42:44.104599 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:42:44.105264 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:42:44.105363 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:42:44.105461 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:42:44.426107 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:42:44.593004 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:42:44.845387 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:42:44.934634 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:42:44.959808 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:42:44.961918 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:42:44.961990 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:42:45.117986 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:42:42.678218 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.175453 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.119775 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:42:45.119913 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:42:45.121333 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:42:45.123001 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:42:45.123783 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:42:45.126031 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:42:47.673678 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:49.674212 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:52.173086 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:54.173797 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:56.178948 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:58.674432 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:00.675207 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:03.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:05.175460 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:07.674421 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:09.674478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:12.882329 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.680602745s)
	I0729 14:43:12.882426 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:12.900267 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:12.910750 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:12.921172 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:12.921194 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:12.921244 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:43:12.931186 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:12.931243 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:12.940800 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:43:12.949875 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:12.949929 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:12.959555 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.968817 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:12.968871 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.978560 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:43:12.987657 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:12.987700 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:12.997142 1039440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:13.057245 1039440 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 14:43:13.057405 1039440 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:13.205227 1039440 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:13.205381 1039440 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:13.205541 1039440 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:43:13.404885 1039440 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:13.407054 1039440 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:13.407148 1039440 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:13.407232 1039440 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:13.407329 1039440 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:13.407411 1039440 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:13.407509 1039440 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:13.407598 1039440 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:13.407688 1039440 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:13.407774 1039440 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:13.407889 1039440 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:13.408006 1039440 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:13.408071 1039440 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:13.408177 1039440 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:13.563569 1039440 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:14.001138 1039440 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:14.091368 1039440 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:14.238732 1039440 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:14.344460 1039440 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:14.346386 1039440 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:14.349309 1039440 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:12.174022 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.673166 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.351183 1039440 out.go:204]   - Booting up control plane ...
	I0729 14:43:14.351293 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:14.351374 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:14.351671 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:14.375878 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:14.377114 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:14.377198 1039440 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:14.528561 1039440 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:14.528665 1039440 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:15.030447 1039440 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044001ms
	I0729 14:43:15.030591 1039440 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:43:20.033357 1039440 kubeadm.go:310] [api-check] The API server is healthy after 5.002708747s
	I0729 14:43:20.055871 1039440 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:43:20.069020 1039440 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:43:20.108465 1039440 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:43:20.108664 1039440 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-751306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:43:20.124596 1039440 kubeadm.go:310] [bootstrap-token] Using token: vqqt7g.hayxn6bly3sjo08s
	I0729 14:43:20.125995 1039440 out.go:204]   - Configuring RBAC rules ...
	I0729 14:43:20.126124 1039440 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:43:20.138826 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:43:20.145976 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:43:20.149166 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:43:20.152875 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:43:20.156268 1039440 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:43:20.446117 1039440 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:43:20.900251 1039440 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:43:21.446105 1039440 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:43:21.446920 1039440 kubeadm.go:310] 
	I0729 14:43:21.446984 1039440 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:43:21.446992 1039440 kubeadm.go:310] 
	I0729 14:43:21.447057 1039440 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:43:21.447063 1039440 kubeadm.go:310] 
	I0729 14:43:21.447084 1039440 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:43:21.447133 1039440 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:43:21.447176 1039440 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:43:21.447182 1039440 kubeadm.go:310] 
	I0729 14:43:21.447233 1039440 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:43:21.447242 1039440 kubeadm.go:310] 
	I0729 14:43:21.447310 1039440 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:43:21.447334 1039440 kubeadm.go:310] 
	I0729 14:43:21.447408 1039440 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:43:21.447515 1039440 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:43:21.447574 1039440 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:43:21.447582 1039440 kubeadm.go:310] 
	I0729 14:43:21.447652 1039440 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:43:21.447722 1039440 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:43:21.447728 1039440 kubeadm.go:310] 
	I0729 14:43:21.447799 1039440 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.447903 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:43:21.447931 1039440 kubeadm.go:310] 	--control-plane 
	I0729 14:43:21.447935 1039440 kubeadm.go:310] 
	I0729 14:43:21.448017 1039440 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:43:21.448025 1039440 kubeadm.go:310] 
	I0729 14:43:21.448115 1039440 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.448239 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:43:21.449071 1039440 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
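Note: the join command printed above carries a --discovery-token-ca-cert-hash, which, per the kubeadm documentation, is the SHA-256 of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A small sketch of recomputing that value from the certs directory the log mentions (/var/lib/minikube/certs); treat it as an illustration rather than kubeadm's own code:

// ca_cert_hash.go - sketch: recompute the discovery-token-ca-cert-hash value.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Re-encode just the public key as DER SubjectPublicKeyInfo and hash it.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}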
	I0729 14:43:21.449117 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:43:21.449134 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:43:21.450744 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:43:16.674887 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:19.175478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:21.452012 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:43:21.464232 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
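Note: the bridge CNI step writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The payload itself is not shown in the log, so the snippet below only writes a generic bridge + host-local configuration of the kind the standard CNI plugins accept; the field values are illustrative assumptions, not minikube's template:

// write_cni_conflist.go - writes an illustrative bridge CNI conflist.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// conflist is a generic example config, not the exact file minikube installs.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes to %s\n", len(conflist), path)
}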
	I0729 14:43:21.486786 1039440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:43:21.486890 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.486887 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-751306 minikube.k8s.io/updated_at=2024_07_29T14_43_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=default-k8s-diff-port-751306 minikube.k8s.io/primary=true
	I0729 14:43:21.689413 1039440 ops.go:34] apiserver oom_adj: -16
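Note: ops.go reads the apiserver's OOM adjustment by catting /proc/$(pgrep kube-apiserver)/oom_adj and sees -16, meaning the static pod is well shielded from the OOM killer. A small sketch of the same read (the pgrep and oom_adj paths come from the log; the helper flow is an assumption):

// apiserver_oom.go - sketch: find the kube-apiserver pid and read its oom_adj.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints one PID per line; take the first match.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0]

	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}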
	I0729 14:43:21.697342 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:22.198351 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.673361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:23.674189 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:26.173782 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:22.698043 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.198259 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.697640 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.198325 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.697702 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.198216 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.697625 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.197978 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.698039 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:27.197794 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.126835 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:43:25.127033 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:25.127306 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
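Note: on the v1.20.0 cluster, kubeadm's kubelet-check keeps failing because the probe, an HTTP GET against the kubelet healthz endpoint on 127.0.0.1:10248, gets connection refused while the kubelet never comes up. A trivial stand-alone version of that probe (the URL comes from the kubeadm message above; everything else is assumed):

// kubelet_healthz.go - sketch of the kubeadm kubelet-check probe.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// This is the "connection refused" case seen in the log.
		fmt.Println("kubelet is not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s (%s)\n", resp.Status, string(body))
}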
	I0729 14:43:28.174036 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:29.667306 1038758 pod_ready.go:81] duration metric: took 4m0.000473541s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	E0729 14:43:29.667341 1038758 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:43:29.667369 1038758 pod_ready.go:38] duration metric: took 4m13.916299366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:29.667407 1038758 kubeadm.go:597] duration metric: took 4m21.57875039s to restartPrimaryControlPlane
	W0729 14:43:29.667481 1038758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:43:29.667513 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:43:27.698036 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.197941 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.697839 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.197525 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.698141 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.197670 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.697615 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.197999 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.697648 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:32.197647 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.127504 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:30.127777 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:32.697837 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.197692 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.697431 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.198048 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.698439 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.802320 1039440 kubeadm.go:1113] duration metric: took 13.31552277s to wait for elevateKubeSystemPrivileges
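Note: the burst of `kubectl get sa default` calls above is a simple poll: roughly every 500ms minikube asks for the default service account until the call succeeds, which here takes about 13 seconds after kubeadm init. A minimal retry loop in the same spirit (the kubectl binary path, kubeconfig path, and interval are taken from the log; the loop and its two-minute deadline are assumptions):

// wait_default_sa.go - sketch: retry `kubectl get sa default` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		kubectl    = "/var/lib/minikube/binaries/v1.30.3/kubectl"
		kubeconfig = "/var/lib/minikube/kubeconfig"
	)
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the default service account")
}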
	I0729 14:43:34.802367 1039440 kubeadm.go:394] duration metric: took 5m16.369033556s to StartCluster
	I0729 14:43:34.802391 1039440 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.802488 1039440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:43:34.804740 1039440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.805049 1039440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:43:34.805148 1039440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:43:34.805251 1039440 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805262 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:43:34.805269 1039440 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805313 1039440 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805294 1039440 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805341 1039440 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:43:34.805358 1039440 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805369 1039440 addons.go:243] addon metrics-server should already be in state true
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805325 1039440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751306"
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805838 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805869 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805904 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805928 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805968 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.806026 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.806625 1039440 out.go:177] * Verifying Kubernetes components...
	I0729 14:43:34.807999 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:43:34.823091 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0729 14:43:34.823103 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0729 14:43:34.823532 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.823556 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.824084 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824111 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824372 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824399 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824427 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.824891 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.825049 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0729 14:43:34.825140 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.825191 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.825210 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.825415 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.825927 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.825945 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.826314 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.826903 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.826939 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.829361 1039440 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.829386 1039440 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:43:34.829417 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.829785 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.829832 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.841752 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I0729 14:43:34.842232 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.842938 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.842965 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.843370 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0729 14:43:34.843397 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.843713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.843818 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.844223 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.844247 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.844615 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.844805 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.846424 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.846619 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.848531 1039440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:43:34.848918 1039440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:43:34.849006 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0729 14:43:34.849421 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.849852 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:43:34.849870 1039440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:43:34.849888 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850037 1039440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:34.850053 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:43:34.850069 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850233 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.850251 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.850659 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.851665 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.851781 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.853937 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854441 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854518 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.854540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854589 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.854779 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855035 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.855098 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.855114 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.855169 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.855465 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.855658 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855828 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.856191 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.869648 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0729 14:43:34.870131 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.870600 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.870618 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.871134 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.871334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.873088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.873340 1039440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:34.873353 1039440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:43:34.873369 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.876289 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876751 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.876765 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876952 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.877132 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.877267 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.877375 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:35.022897 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:43:35.044537 1039440 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057697 1039440 node_ready.go:49] node "default-k8s-diff-port-751306" has status "Ready":"True"
	I0729 14:43:35.057729 1039440 node_ready.go:38] duration metric: took 13.149458ms for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057744 1039440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:35.073050 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:35.150661 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:35.170721 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:35.228871 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:43:35.228903 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:43:35.276845 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:43:35.276880 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:43:35.335623 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.335656 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:43:35.407804 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.446540 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446567 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.446927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.446959 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.446972 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.446985 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.447286 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.447307 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.454199 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.454216 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.454476 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.454495 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.824592 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.824615 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.825058 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.825441 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.825525 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.825567 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.825576 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.827444 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.827454 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.827465 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331175 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331575 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331597 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331607 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331923 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331961 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331986 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.332003 1039440 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751306"
	I0729 14:43:36.333995 1039440 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 14:43:36.335441 1039440 addons.go:510] duration metric: took 1.53029708s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
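The addon-enable step above copies the storage-provisioner, storageclass, and metrics-server manifests to /etc/kubernetes/addons/ on the guest and applies them with a single kubectl invocation. A small sketch of that apply step, assuming kubectl is available locally and the manifests are already on disk; the file list and KUBECONFIG path come from the log lines above, while the local exec wrapper is purely illustrative:

// Illustrative sketch only: apply the metrics-server addon manifests the way
// the "kubectl apply -f ..." command in the log does.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{ // paths taken from the log lines above
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl apply failed: %v", err)
	}
}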
	I0729 14:43:37.081992 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.082019 1039440 pod_ready.go:81] duration metric: took 2.008931409s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.082031 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086173 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.086194 1039440 pod_ready.go:81] duration metric: took 4.154163ms for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086203 1039440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090617 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.090636 1039440 pod_ready.go:81] duration metric: took 4.42625ms for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090647 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094929 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.094950 1039440 pod_ready.go:81] duration metric: took 4.296245ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094962 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099462 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.099483 1039440 pod_ready.go:81] duration metric: took 4.513354ms for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099495 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478252 1039440 pod_ready.go:92] pod "kube-proxy-tqtjx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.478281 1039440 pod_ready.go:81] duration metric: took 378.778206ms for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478295 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878655 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.878678 1039440 pod_ready.go:81] duration metric: took 400.374407ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878686 1039440 pod_ready.go:38] duration metric: took 2.820929833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:37.878702 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:43:37.878752 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:43:37.894699 1039440 api_server.go:72] duration metric: took 3.08960429s to wait for apiserver process to appear ...
	I0729 14:43:37.894730 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:43:37.894767 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:43:37.899710 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:43:37.900733 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:43:37.900757 1039440 api_server.go:131] duration metric: took 6.019707ms to wait for apiserver health ...
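The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint until it answers 200 "ok". A minimal sketch of such a poll, using the endpoint from the log; TLS verification is skipped here only for brevity (an assumption of the sketch), whereas a proper check should trust the cluster CA:

// Illustrative sketch only: poll https://192.168.72.233:8444/healthz until it
// returns 200 "ok", as the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.233:8444/healthz" // endpoint from the log

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}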
	I0729 14:43:37.900765 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:43:38.083157 1039440 system_pods.go:59] 9 kube-system pods found
	I0729 14:43:38.083197 1039440 system_pods.go:61] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.083204 1039440 system_pods.go:61] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.083210 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.083215 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.083221 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.083226 1039440 system_pods.go:61] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.083231 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.083240 1039440 system_pods.go:61] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.083246 1039440 system_pods.go:61] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.083255 1039440 system_pods.go:74] duration metric: took 182.484884ms to wait for pod list to return data ...
	I0729 14:43:38.083269 1039440 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:43:38.277387 1039440 default_sa.go:45] found service account: "default"
	I0729 14:43:38.277418 1039440 default_sa.go:55] duration metric: took 194.142035ms for default service account to be created ...
	I0729 14:43:38.277429 1039440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:43:38.481158 1039440 system_pods.go:86] 9 kube-system pods found
	I0729 14:43:38.481194 1039440 system_pods.go:89] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.481202 1039440 system_pods.go:89] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.481210 1039440 system_pods.go:89] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.481217 1039440 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.481225 1039440 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.481230 1039440 system_pods.go:89] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.481236 1039440 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.481248 1039440 system_pods.go:89] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.481255 1039440 system_pods.go:89] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.481267 1039440 system_pods.go:126] duration metric: took 203.830126ms to wait for k8s-apps to be running ...
	I0729 14:43:38.481280 1039440 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:43:38.481329 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:38.496175 1039440 system_svc.go:56] duration metric: took 14.88714ms WaitForService to wait for kubelet
	I0729 14:43:38.496209 1039440 kubeadm.go:582] duration metric: took 3.691120463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:43:38.496237 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:43:38.677820 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:43:38.677847 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:43:38.677859 1039440 node_conditions.go:105] duration metric: took 181.616437ms to run NodePressure ...
	I0729 14:43:38.677874 1039440 start.go:241] waiting for startup goroutines ...
	I0729 14:43:38.677882 1039440 start.go:246] waiting for cluster config update ...
	I0729 14:43:38.677894 1039440 start.go:255] writing updated cluster config ...
	I0729 14:43:38.678166 1039440 ssh_runner.go:195] Run: rm -f paused
	I0729 14:43:38.728616 1039440 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:43:38.730494 1039440 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751306" cluster and "default" namespace by default
	I0729 14:43:40.128244 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:40.128447 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:55.945251 1038758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.277690166s)
	I0729 14:43:55.945335 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:55.960870 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:55.971175 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:55.981424 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:55.981456 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:55.981512 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:43:55.992098 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:55.992165 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:56.002242 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:43:56.011416 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:56.011486 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:56.020848 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.030219 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:56.030280 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.039957 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:43:56.049607 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:56.049670 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
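The stale-config cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the endpoint is not found (here the files simply do not exist), so that the following "kubeadm init" starts from a clean state. A minimal sketch of that check, assuming local file access; minikube performs the equivalent grep/rm over SSH:

// Illustrative sketch only: remove kubeconfigs that do not reference the
// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean up (the case in this log)
		}
		if !bytes.Contains(data, endpoint) {
			fmt.Printf("removing stale %s\n", f)
			_ = os.Remove(f)
		}
	}
}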
	I0729 14:43:56.059413 1038758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:56.109453 1038758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 14:43:56.109563 1038758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:56.230876 1038758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:56.231018 1038758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:56.231126 1038758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 14:43:56.244355 1038758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:56.246461 1038758 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:56.246573 1038758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:56.246666 1038758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:56.246755 1038758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:56.246843 1038758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:56.246964 1038758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:56.247169 1038758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:56.247267 1038758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:56.247365 1038758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:56.247473 1038758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:56.247588 1038758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:56.247646 1038758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:56.247718 1038758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:56.593641 1038758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:56.714510 1038758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:56.862780 1038758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:57.010367 1038758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:57.108324 1038758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:57.109028 1038758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:57.111425 1038758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:57.113088 1038758 out.go:204]   - Booting up control plane ...
	I0729 14:43:57.113217 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:57.113336 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:57.113501 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:57.135168 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:57.141915 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:57.142022 1038758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:57.269947 1038758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:57.270056 1038758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:57.772110 1038758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.03343ms
	I0729 14:43:57.772229 1038758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:44:02.773898 1038758 kubeadm.go:310] [api-check] The API server is healthy after 5.00168383s
	I0729 14:44:02.788629 1038758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:44:02.805813 1038758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:44:02.831687 1038758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:44:02.831963 1038758 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-603534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:44:02.842427 1038758 kubeadm.go:310] [bootstrap-token] Using token: hg3j3v.551bb9ju0g9ic9e6
	I0729 14:44:00.129004 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:00.129267 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:02.844018 1038758 out.go:204]   - Configuring RBAC rules ...
	I0729 14:44:02.844160 1038758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:44:02.851693 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:44:02.859496 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:44:02.863556 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:44:02.866896 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:44:02.871375 1038758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:44:03.181687 1038758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:44:03.618445 1038758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:44:04.184562 1038758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:44:04.185548 1038758 kubeadm.go:310] 
	I0729 14:44:04.185655 1038758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:44:04.185689 1038758 kubeadm.go:310] 
	I0729 14:44:04.185788 1038758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:44:04.185801 1038758 kubeadm.go:310] 
	I0729 14:44:04.185825 1038758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:44:04.185906 1038758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:44:04.185983 1038758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:44:04.185992 1038758 kubeadm.go:310] 
	I0729 14:44:04.186079 1038758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:44:04.186090 1038758 kubeadm.go:310] 
	I0729 14:44:04.186155 1038758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:44:04.186165 1038758 kubeadm.go:310] 
	I0729 14:44:04.186231 1038758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:44:04.186337 1038758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:44:04.186431 1038758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:44:04.186441 1038758 kubeadm.go:310] 
	I0729 14:44:04.186575 1038758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:44:04.186679 1038758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:44:04.186689 1038758 kubeadm.go:310] 
	I0729 14:44:04.186810 1038758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.186944 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:44:04.186974 1038758 kubeadm.go:310] 	--control-plane 
	I0729 14:44:04.186984 1038758 kubeadm.go:310] 
	I0729 14:44:04.187102 1038758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:44:04.187111 1038758 kubeadm.go:310] 
	I0729 14:44:04.187224 1038758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.187375 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:44:04.188377 1038758 kubeadm.go:310] W0729 14:43:56.090027    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188711 1038758 kubeadm.go:310] W0729 14:43:56.090887    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188834 1038758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:04.188852 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:44:04.188863 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:44:04.190535 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:44:04.191948 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:44:04.203414 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
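The bridge CNI step above creates /etc/cni/net.d and copies a conflist named 1-k8s.conflist onto the node. As a rough sketch, the snippet below writes a generic bridge-plus-portmap conflist of that shape; the JSON contents and the 10.244.0.0/16 subnet are assumptions for illustration, not the exact 496-byte file minikube generates:

// Illustrative sketch only: write a generic bridge CNI conflist to
// /etc/cni/net.d/1-k8s.conflist, as the log shows minikube doing.
package main

import (
	"log"
	"os"
	"path/filepath"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // matches "sudo mkdir -p /etc/cni/net.d"
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}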
	I0729 14:44:04.223025 1038758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:44:04.223114 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.223132 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603534 minikube.k8s.io/updated_at=2024_07_29T14_44_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=no-preload-603534 minikube.k8s.io/primary=true
	I0729 14:44:04.240353 1038758 ops.go:34] apiserver oom_adj: -16
	I0729 14:44:04.442077 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.942458 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.442843 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.942138 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.442232 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.942611 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.442939 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.942661 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.443044 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.522590 1038758 kubeadm.go:1113] duration metric: took 4.299548803s to wait for elevateKubeSystemPrivileges
	I0729 14:44:08.522633 1038758 kubeadm.go:394] duration metric: took 5m0.491164642s to StartCluster
	I0729 14:44:08.522657 1038758 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.522755 1038758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:44:08.524573 1038758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.524893 1038758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:44:08.524999 1038758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:44:08.525112 1038758 addons.go:69] Setting storage-provisioner=true in profile "no-preload-603534"
	I0729 14:44:08.525150 1038758 addons.go:234] Setting addon storage-provisioner=true in "no-preload-603534"
	I0729 14:44:08.525146 1038758 addons.go:69] Setting default-storageclass=true in profile "no-preload-603534"
	I0729 14:44:08.525155 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:44:08.525167 1038758 addons.go:69] Setting metrics-server=true in profile "no-preload-603534"
	I0729 14:44:08.525182 1038758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603534"
	W0729 14:44:08.525162 1038758 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:44:08.525229 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525185 1038758 addons.go:234] Setting addon metrics-server=true in "no-preload-603534"
	W0729 14:44:08.525264 1038758 addons.go:243] addon metrics-server should already be in state true
	I0729 14:44:08.525294 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525510 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525553 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525652 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525668 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525688 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525715 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.526581 1038758 out.go:177] * Verifying Kubernetes components...
	I0729 14:44:08.527919 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:44:08.541874 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0729 14:44:08.542126 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0729 14:44:08.542251 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0729 14:44:08.542397 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542505 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542664 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542948 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.542969 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543075 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543090 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543115 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543127 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543323 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543546 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543551 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543758 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.543779 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544014 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.544035 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544149 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.548026 1038758 addons.go:234] Setting addon default-storageclass=true in "no-preload-603534"
	W0729 14:44:08.548048 1038758 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:44:08.548079 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.548457 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.548478 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.559699 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0729 14:44:08.560297 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.560916 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.560953 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.561332 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.561519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.563422 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.564073 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0729 14:44:08.564524 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.565011 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.565038 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.565427 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.565592 1038758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:44:08.565752 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.566901 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:44:08.566921 1038758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:44:08.566941 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.567688 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.568067 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I0729 14:44:08.568443 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.569019 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.569040 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.569462 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.569583 1038758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:44:08.570038 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.570074 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.571187 1038758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.571204 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:44:08.571223 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.571595 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572203 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.572247 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572506 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.572704 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.572893 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.573100 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.574551 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.574900 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.574919 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.575074 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.575286 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.575427 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.575551 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.585902 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0729 14:44:08.586319 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.586778 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.586803 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.587135 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.587357 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.588606 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.588827 1038758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.588844 1038758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:44:08.588861 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.591169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591434 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.591466 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591600 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.591766 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.591873 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.592103 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.752015 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:44:08.775498 1038758 node_ready.go:35] waiting up to 6m0s for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788547 1038758 node_ready.go:49] node "no-preload-603534" has status "Ready":"True"
	I0729 14:44:08.788572 1038758 node_ready.go:38] duration metric: took 13.040411ms for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788582 1038758 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:08.793475 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:08.861468 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.869542 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:44:08.869567 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:44:08.898398 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.911120 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:44:08.911148 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:44:08.931151 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:08.931179 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:44:08.976093 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:09.449857 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449885 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.449863 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449958 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450343 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450354 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450361 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450373 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450374 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450389 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450442 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450455 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450476 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450487 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450620 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450635 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450637 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450779 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450799 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.493934 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.493959 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.494303 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.494320 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.494342 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.706038 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706072 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.706366 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.706382 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.706391 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706398 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.707956 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.707958 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.707986 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.708015 1038758 addons.go:475] Verifying addon metrics-server=true in "no-preload-603534"
	I0729 14:44:09.709729 1038758 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:44:09.711283 1038758 addons.go:510] duration metric: took 1.186289164s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:44:10.807976 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:13.300325 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:15.800886 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.300042 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.800080 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.800111 1038758 pod_ready.go:81] duration metric: took 10.006613711s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.800124 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804949 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.804974 1038758 pod_ready.go:81] duration metric: took 4.840477ms for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804985 1038758 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810160 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.810176 1038758 pod_ready.go:81] duration metric: took 5.184516ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810185 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814785 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.814807 1038758 pod_ready.go:81] duration metric: took 4.615516ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814819 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819023 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.819044 1038758 pod_ready.go:81] duration metric: took 4.215656ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819056 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198226 1038758 pod_ready.go:92] pod "kube-proxy-7mr4z" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.198252 1038758 pod_ready.go:81] duration metric: took 379.18928ms for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198265 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598783 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.598824 1038758 pod_ready.go:81] duration metric: took 400.55255ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598835 1038758 pod_ready.go:38] duration metric: took 10.810240266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:19.598865 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:44:19.598931 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:44:19.615165 1038758 api_server.go:72] duration metric: took 11.090236578s to wait for apiserver process to appear ...
	I0729 14:44:19.615190 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:44:19.615211 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:44:19.619574 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:44:19.620586 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:44:19.620610 1038758 api_server.go:131] duration metric: took 5.412598ms to wait for apiserver health ...
	I0729 14:44:19.620620 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:44:19.802376 1038758 system_pods.go:59] 9 kube-system pods found
	I0729 14:44:19.802408 1038758 system_pods.go:61] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:19.802415 1038758 system_pods.go:61] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:19.802420 1038758 system_pods.go:61] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:19.802429 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:19.802434 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:19.802441 1038758 system_pods.go:61] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:19.802446 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:19.802454 1038758 system_pods.go:61] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:19.802470 1038758 system_pods.go:61] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:19.802482 1038758 system_pods.go:74] duration metric: took 181.853357ms to wait for pod list to return data ...
	I0729 14:44:19.802491 1038758 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:44:19.998312 1038758 default_sa.go:45] found service account: "default"
	I0729 14:44:19.998348 1038758 default_sa.go:55] duration metric: took 195.845187ms for default service account to be created ...
	I0729 14:44:19.998361 1038758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:44:20.201742 1038758 system_pods.go:86] 9 kube-system pods found
	I0729 14:44:20.201778 1038758 system_pods.go:89] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:20.201787 1038758 system_pods.go:89] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:20.201793 1038758 system_pods.go:89] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:20.201800 1038758 system_pods.go:89] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:20.201807 1038758 system_pods.go:89] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:20.201812 1038758 system_pods.go:89] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:20.201818 1038758 system_pods.go:89] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:20.201826 1038758 system_pods.go:89] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:20.201835 1038758 system_pods.go:89] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:20.201850 1038758 system_pods.go:126] duration metric: took 203.481528ms to wait for k8s-apps to be running ...
	I0729 14:44:20.201860 1038758 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:44:20.201914 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:20.217416 1038758 system_svc.go:56] duration metric: took 15.543768ms WaitForService to wait for kubelet
	I0729 14:44:20.217445 1038758 kubeadm.go:582] duration metric: took 11.692521258s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:44:20.217464 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:44:20.398667 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:44:20.398696 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:44:20.398708 1038758 node_conditions.go:105] duration metric: took 181.238886ms to run NodePressure ...
	I0729 14:44:20.398720 1038758 start.go:241] waiting for startup goroutines ...
	I0729 14:44:20.398727 1038758 start.go:246] waiting for cluster config update ...
	I0729 14:44:20.398738 1038758 start.go:255] writing updated cluster config ...
	I0729 14:44:20.399014 1038758 ssh_runner.go:195] Run: rm -f paused
	I0729 14:44:20.452187 1038758 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 14:44:20.454434 1038758 out.go:177] * Done! kubectl is now configured to use "no-preload-603534" cluster and "default" namespace by default
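	The healthz wait logged above (api_server.go) amounts to polling https://<apiserver-ip>:8443/healthz until it returns 200 "ok". Below is a minimal, illustrative Go sketch of that pattern, not minikube's actual implementation; the endpoint, the 4-minute deadline, and the decision to skip TLS verification (instead of loading the cluster CA) are assumptions for the example only.

	// healthz_wait.go: illustrative sketch of polling an apiserver /healthz
	// endpoint until it reports healthy, in the spirit of the api_server.go
	// wait loop shown in the log above. Assumed values, not from minikube.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver here serves a self-signed certificate; a real
			// client would trust the cluster CA instead of skipping checks.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.116:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}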
	I0729 14:44:40.130597 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:40.130831 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:40.130848 1039759 kubeadm.go:310] 
	I0729 14:44:40.130903 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:44:40.130956 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:44:40.130966 1039759 kubeadm.go:310] 
	I0729 14:44:40.131032 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:44:40.131110 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:44:40.131256 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:44:40.131270 1039759 kubeadm.go:310] 
	I0729 14:44:40.131450 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:44:40.131499 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:44:40.131542 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:44:40.131552 1039759 kubeadm.go:310] 
	I0729 14:44:40.131686 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:44:40.131795 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:44:40.131806 1039759 kubeadm.go:310] 
	I0729 14:44:40.131947 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:44:40.132064 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:44:40.132162 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:44:40.132254 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:44:40.132264 1039759 kubeadm.go:310] 
	I0729 14:44:40.133208 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:40.133363 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:44:40.133468 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 14:44:40.133610 1039759 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 14:44:40.133676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:44:40.607039 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:40.623771 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:44:40.636278 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:44:40.636310 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:44:40.636371 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:44:40.647768 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:44:40.647827 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:44:40.658281 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:44:40.668393 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:44:40.668477 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:44:40.678521 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.687891 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:44:40.687960 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.698384 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:44:40.708965 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:44:40.709047 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:44:40.719665 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:44:40.796786 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:44:40.796883 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:44:40.946106 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:44:40.946258 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:44:40.946388 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:44:41.140483 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:44:41.142390 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:44:41.142503 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:44:41.142610 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:44:41.142722 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:44:41.142811 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:44:41.142910 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:44:41.142995 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:44:41.143092 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:44:41.143180 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:44:41.143279 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:44:41.143390 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:44:41.143445 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:44:41.143524 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:44:41.188854 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:44:41.329957 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:44:41.968599 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:44:42.034788 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:44:42.055543 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:44:42.056622 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:44:42.056715 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:44:42.204165 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:44:42.205935 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:44:42.206076 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:44:42.216259 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:44:42.217947 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:44:42.219361 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:44:42.221672 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:45:22.223830 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:45:22.223940 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:22.224139 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:27.224303 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:27.224574 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:37.224905 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:37.225090 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:57.226285 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:57.226533 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227279 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:46:37.227485 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227494 1039759 kubeadm.go:310] 
	I0729 14:46:37.227528 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:46:37.227605 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:46:37.227627 1039759 kubeadm.go:310] 
	I0729 14:46:37.227683 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:46:37.227732 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:46:37.227861 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:46:37.227870 1039759 kubeadm.go:310] 
	I0729 14:46:37.228011 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:46:37.228093 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:46:37.228140 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:46:37.228173 1039759 kubeadm.go:310] 
	I0729 14:46:37.228310 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:46:37.228443 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:46:37.228454 1039759 kubeadm.go:310] 
	I0729 14:46:37.228606 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:46:37.228714 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:46:37.228821 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:46:37.228913 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:46:37.228934 1039759 kubeadm.go:310] 
	I0729 14:46:37.229926 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:46:37.230070 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:46:37.230175 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:46:37.230284 1039759 kubeadm.go:394] duration metric: took 7m57.608522587s to StartCluster
	I0729 14:46:37.230347 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:46:37.230435 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:46:37.276238 1039759 cri.go:89] found id: ""
	I0729 14:46:37.276294 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.276304 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:46:37.276317 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:46:37.276439 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:46:37.309934 1039759 cri.go:89] found id: ""
	I0729 14:46:37.309960 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.309969 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:46:37.309975 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:46:37.310031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:46:37.343286 1039759 cri.go:89] found id: ""
	I0729 14:46:37.343312 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.343320 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:46:37.343325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:46:37.343384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:46:37.378735 1039759 cri.go:89] found id: ""
	I0729 14:46:37.378763 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.378773 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:46:37.378779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:46:37.378834 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:46:37.414244 1039759 cri.go:89] found id: ""
	I0729 14:46:37.414275 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.414284 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:46:37.414290 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:46:37.414372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:46:37.453809 1039759 cri.go:89] found id: ""
	I0729 14:46:37.453842 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.453858 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:46:37.453866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:46:37.453955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:46:37.492250 1039759 cri.go:89] found id: ""
	I0729 14:46:37.492279 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.492288 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:46:37.492294 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:46:37.492360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:46:37.554342 1039759 cri.go:89] found id: ""
	I0729 14:46:37.554377 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.554388 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:46:37.554404 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:46:37.554422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:46:37.631118 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:46:37.631165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:46:37.650991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:46:37.651047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:46:37.731852 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:46:37.731880 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:46:37.731897 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:46:37.849049 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:46:37.849092 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 14:46:37.893957 1039759 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:46:37.894031 1039759 out.go:239] * 
	W0729 14:46:37.894120 1039759 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.894150 1039759 out.go:239] * 
	W0729 14:46:37.895278 1039759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:46:37.898735 1039759 out.go:177] 
	W0729 14:46:37.900049 1039759 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.900115 1039759 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:46:37.900146 1039759 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 14:46:37.901531 1039759 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.754962828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264399754943393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3512f308-1b84-45e5-8be3-4b5a8224cc5d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.755517274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=207e6e6b-58c3-4e86-8025-ca650d1beb54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.755593036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=207e6e6b-58c3-4e86-8025-ca650d1beb54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.755699212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=207e6e6b-58c3-4e86-8025-ca650d1beb54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.795442979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df5ba3ed-21eb-4c0b-9dcb-fc834f97d1c0 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.795565469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df5ba3ed-21eb-4c0b-9dcb-fc834f97d1c0 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.796775286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7113925-3c29-49b0-8ef9-917e54634d1e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.797206347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264399797186112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7113925-3c29-49b0-8ef9-917e54634d1e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.797935659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6d21404-0d8a-46b5-a5c6-603d30568528 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.797983922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6d21404-0d8a-46b5-a5c6-603d30568528 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.798014641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b6d21404-0d8a-46b5-a5c6-603d30568528 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.832815342Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=970dd4e8-c72b-446d-989f-acc3bf3308ca name=/runtime.v1.RuntimeService/Version
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.832932842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=970dd4e8-c72b-446d-989f-acc3bf3308ca name=/runtime.v1.RuntimeService/Version
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.834484059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5ae5b29-d55d-4619-a63d-8ad4488c43c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.835050297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264399835016770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5ae5b29-d55d-4619-a63d-8ad4488c43c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.835813098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3dda9fb-9acc-4edb-8bf2-18f70564cff9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.835896403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3dda9fb-9acc-4edb-8bf2-18f70564cff9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.835943781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e3dda9fb-9acc-4edb-8bf2-18f70564cff9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.872709635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=91b3c4a6-1b8a-450a-93df-9797b37d05ec name=/runtime.v1.RuntimeService/Version
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.872839490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91b3c4a6-1b8a-450a-93df-9797b37d05ec name=/runtime.v1.RuntimeService/Version
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.874024328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d718a94-c406-43ff-87ad-60345303f390 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.874412396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264399874388301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d718a94-c406-43ff-87ad-60345303f390 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.874964921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b87dfbf3-a68a-4c1c-88b2-b5f1b177b1d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.875008699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b87dfbf3-a68a-4c1c-88b2-b5f1b177b1d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:46:39 old-k8s-version-360866 crio[647]: time="2024-07-29 14:46:39.875037195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b87dfbf3-a68a-4c1c-88b2-b5f1b177b1d6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 14:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057215] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048160] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.116415] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.599440] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.593896] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.539359] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.065084] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070151] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.197280] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.136393] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.263438] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.439493] systemd-fstab-generator[833]: Ignoring "noauto" option for root device
	[  +0.060871] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.313828] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +12.194292] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 14:42] systemd-fstab-generator[4994]: Ignoring "noauto" option for root device
	[Jul29 14:44] systemd-fstab-generator[5279]: Ignoring "noauto" option for root device
	[  +0.064843] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:46:40 up 8 min,  0 users,  load average: 0.40, 0.16, 0.07
	Linux old-k8s-version-360866 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00024c0e0, 0xc000ba6210, 0xc000ba6210, 0x0, 0x0)
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008c0c40)
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]: goroutine 150 [runnable]:
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000050500, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000d173e0, 0x0, 0x0)
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008c0c40)
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 29 14:46:36 old-k8s-version-360866 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 29 14:46:36 old-k8s-version-360866 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 14:46:36 old-k8s-version-360866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 14:46:37 old-k8s-version-360866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 29 14:46:37 old-k8s-version-360866 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 14:46:37 old-k8s-version-360866 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 14:46:37 old-k8s-version-360866 kubelet[5506]: I0729 14:46:37.624433    5506 server.go:416] Version: v1.20.0
	Jul 29 14:46:37 old-k8s-version-360866 kubelet[5506]: I0729 14:46:37.624717    5506 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 14:46:37 old-k8s-version-360866 kubelet[5506]: I0729 14:46:37.626499    5506 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 14:46:37 old-k8s-version-360866 kubelet[5506]: W0729 14:46:37.627356    5506 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 14:46:37 old-k8s-version-360866 kubelet[5506]: I0729 14:46:37.627663    5506 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 2 (244.030447ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-360866" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (707.63s)
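Note: the captured output above already names the likely follow-ups for this K8S_KUBELET_NOT_RUNNING failure (kubelet restart counter at 20, empty CRI-O container list, suggested cgroup-driver override). A minimal sketch of those checks, assuming shell access to the same VM and the profile name old-k8s-version-360866 taken from the log above; these commands come from the suggestions in the captured output and were not part of the test run itself:

	# inspect why the kubelet keeps restarting (systemd shows "restart counter is at 20")
	out/minikube-linux-amd64 ssh -p old-k8s-version-360866 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-360866 -- sudo journalctl -xeu kubelet
	# list any control-plane containers CRI-O managed to start (the log shows an empty list)
	out/minikube-linux-amd64 ssh -p old-k8s-version-360866 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry with the cgroup driver override suggested in the minikube output
	out/minikube-linux-amd64 start -p old-k8s-version-360866 --extra-config=kubelet.cgroup-driver=systemd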

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 14:42:59.776023  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:43:08.881770  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-668123 -n embed-certs-668123
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 14:51:34.23166744 +0000 UTC m=+5980.639391206
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-668123 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-668123 logs -n 25: (2.095474695s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-513289 sudo cat                             | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo find                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo crio                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-513289                                      | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-054967 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | disable-driver-mounts-054967                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:34:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:34:53.874295 1039759 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:34:53.874567 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874577 1039759 out.go:304] Setting ErrFile to fd 2...
	I0729 14:34:53.874580 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874774 1039759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:34:53.875294 1039759 out.go:298] Setting JSON to false
	I0729 14:34:53.876313 1039759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15446,"bootTime":1722248248,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:34:53.876373 1039759 start.go:139] virtualization: kvm guest
	I0729 14:34:53.878446 1039759 out.go:177] * [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:34:53.879820 1039759 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:34:53.879855 1039759 notify.go:220] Checking for updates...
	I0729 14:34:53.882201 1039759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:34:53.883330 1039759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:34:53.884514 1039759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:34:53.885734 1039759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:34:53.886894 1039759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:34:53.888361 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:34:53.888789 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.888850 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.903960 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 14:34:53.904467 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.905083 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.905112 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.905449 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.905609 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.907360 1039759 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 14:34:53.908710 1039759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:34:53.909026 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.909064 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.923834 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0729 14:34:53.924300 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.924787 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.924809 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.925150 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.925352 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.960368 1039759 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:34:53.961649 1039759 start.go:297] selected driver: kvm2
	I0729 14:34:53.961662 1039759 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.961778 1039759 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:34:53.962398 1039759 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.962459 1039759 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:34:53.977941 1039759 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:34:53.978311 1039759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:34:53.978341 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:34:53.978350 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:34:53.978395 1039759 start.go:340] cluster config:
	{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.978499 1039759 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.980167 1039759 out.go:177] * Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	I0729 14:34:55.588663 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:34:53.981356 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:34:53.981390 1039759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:34:53.981400 1039759 cache.go:56] Caching tarball of preloaded images
	I0729 14:34:53.981477 1039759 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:34:53.981487 1039759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:34:53.981600 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:34:53.981775 1039759 start.go:360] acquireMachinesLock for old-k8s-version-360866: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:34:58.660730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:04.740665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:07.812781 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:13.892659 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:16.964692 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:23.044749 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:26.116761 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:32.196730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:35.268709 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:41.348712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:44.420693 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:50.500715 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:53.572717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:59.652707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:02.724722 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:08.804719 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:11.876665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:17.956684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:21.028707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:27.108667 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:30.180710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:36.260645 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:39.332717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:45.412694 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:48.484713 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:54.564703 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:57.636707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:03.716690 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:06.788660 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:12.868658 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:15.940708 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:22.020684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:25.092712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:31.172710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:34.177216 1039263 start.go:364] duration metric: took 3m42.890532077s to acquireMachinesLock for "embed-certs-668123"
	I0729 14:37:34.177291 1039263 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:34.177300 1039263 fix.go:54] fixHost starting: 
	I0729 14:37:34.177641 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:34.177673 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:34.193427 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0729 14:37:34.193879 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:34.194396 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:37:34.194421 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:34.194774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:34.194987 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:34.195156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:37:34.196597 1039263 fix.go:112] recreateIfNeeded on embed-certs-668123: state=Stopped err=<nil>
	I0729 14:37:34.196642 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	W0729 14:37:34.196802 1039263 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:34.198564 1039263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-668123" ...
	I0729 14:37:34.199926 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Start
	I0729 14:37:34.200086 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring networks are active...
	I0729 14:37:34.200833 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network default is active
	I0729 14:37:34.201159 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network mk-embed-certs-668123 is active
	I0729 14:37:34.201578 1039263 main.go:141] libmachine: (embed-certs-668123) Getting domain xml...
	I0729 14:37:34.202214 1039263 main.go:141] libmachine: (embed-certs-668123) Creating domain...
	I0729 14:37:34.510575 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting to get IP...
	I0729 14:37:34.511459 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.511909 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.512006 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.511904 1040307 retry.go:31] will retry after 294.890973ms: waiting for machine to come up
	I0729 14:37:34.808513 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.809044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.809070 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.809007 1040307 retry.go:31] will retry after 296.152247ms: waiting for machine to come up
	I0729 14:37:35.106423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.106839 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.106872 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.106773 1040307 retry.go:31] will retry after 384.830082ms: waiting for machine to come up
	I0729 14:37:35.493463 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.493902 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.493933 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.493861 1040307 retry.go:31] will retry after 490.673812ms: waiting for machine to come up
	I0729 14:37:35.986675 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.987184 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.987235 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.987099 1040307 retry.go:31] will retry after 725.022775ms: waiting for machine to come up
	I0729 14:37:34.174673 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:34.174713 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175060 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:37:34.175084 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175279 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:37:34.177042 1038758 machine.go:97] duration metric: took 4m37.39644293s to provisionDockerMachine
	I0729 14:37:34.177087 1038758 fix.go:56] duration metric: took 4m37.417815827s for fixHost
	I0729 14:37:34.177094 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 4m37.417912853s
	W0729 14:37:34.177127 1038758 start.go:714] error starting host: provision: host is not running
	W0729 14:37:34.177230 1038758 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 14:37:34.177240 1038758 start.go:729] Will try again in 5 seconds ...
	I0729 14:37:36.714078 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:36.714502 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:36.714565 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:36.714389 1040307 retry.go:31] will retry after 722.684756ms: waiting for machine to come up
	I0729 14:37:37.438316 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:37.438859 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:37.438891 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:37.438802 1040307 retry.go:31] will retry after 1.163999997s: waiting for machine to come up
	I0729 14:37:38.604109 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:38.604503 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:38.604531 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:38.604469 1040307 retry.go:31] will retry after 1.401566003s: waiting for machine to come up
	I0729 14:37:40.007310 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:40.007900 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:40.007929 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:40.007839 1040307 retry.go:31] will retry after 1.40470791s: waiting for machine to come up
	I0729 14:37:39.178982 1038758 start.go:360] acquireMachinesLock for no-preload-603534: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:37:41.414509 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:41.415018 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:41.415049 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:41.414959 1040307 retry.go:31] will retry after 2.205183048s: waiting for machine to come up
	I0729 14:37:43.623427 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:43.623894 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:43.623922 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:43.623856 1040307 retry.go:31] will retry after 2.444881913s: waiting for machine to come up
	I0729 14:37:46.070961 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:46.071314 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:46.071338 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:46.071271 1040307 retry.go:31] will retry after 3.115189863s: waiting for machine to come up
	I0729 14:37:49.187610 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:49.188107 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:49.188134 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:49.188054 1040307 retry.go:31] will retry after 3.139484284s: waiting for machine to come up
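	The repeated "retry.go:31] will retry after ..." lines above show libmachine polling the libvirt DHCP leases for the restarted VM's MAC address with a growing, jittered delay until a lease appears. A minimal Go sketch of that polling pattern follows; the lookupIP helper, package name, and timings are assumptions for illustration, not minikube's actual code.

	package vmwait

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt DHCP leases for the
	// domain's MAC address; in the log this is done by the kvm2 driver.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries the lookup with a growing, jittered delay,
	// mirroring the "will retry after ..." lines, until an address
	// appears or the deadline passes.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		base := 300 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
		return "", fmt.Errorf("timed out waiting for an IP for MAC %s", mac)
	}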
	I0729 14:37:53.653416 1039440 start.go:364] duration metric: took 3m41.12464482s to acquireMachinesLock for "default-k8s-diff-port-751306"
	I0729 14:37:53.653486 1039440 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:53.653494 1039440 fix.go:54] fixHost starting: 
	I0729 14:37:53.653880 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:53.653913 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:53.671499 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0729 14:37:53.671927 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:53.672550 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:37:53.672584 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:53.672986 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:53.673198 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:37:53.673353 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:37:53.674706 1039440 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751306: state=Stopped err=<nil>
	I0729 14:37:53.674736 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	W0729 14:37:53.674896 1039440 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:53.677098 1039440 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751306" ...
	I0729 14:37:52.329477 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.329880 1039263 main.go:141] libmachine: (embed-certs-668123) Found IP for machine: 192.168.50.53
	I0729 14:37:52.329895 1039263 main.go:141] libmachine: (embed-certs-668123) Reserving static IP address...
	I0729 14:37:52.329906 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has current primary IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.330376 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.330414 1039263 main.go:141] libmachine: (embed-certs-668123) Reserved static IP address: 192.168.50.53
	I0729 14:37:52.330433 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | skip adding static IP to network mk-embed-certs-668123 - found existing host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"}
	I0729 14:37:52.330453 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Getting to WaitForSSH function...
	I0729 14:37:52.330465 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting for SSH to be available...
	I0729 14:37:52.332510 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332794 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.332821 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332897 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH client type: external
	I0729 14:37:52.332931 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa (-rw-------)
	I0729 14:37:52.332963 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:37:52.332976 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | About to run SSH command:
	I0729 14:37:52.332989 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | exit 0
	I0729 14:37:52.456152 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | SSH cmd err, output: <nil>: 
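	The "Waiting for SSH" step above keeps running `exit 0` through an external ssh client with non-interactive options until the command succeeds, which is what the empty "SSH cmd err, output" line records. A small Go sketch of that probe, assuming the system ssh binary; the helper name and the 2-second poll interval are assumptions, the option values are copied from the log.

	package sshwait

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH runs "exit 0" on the guest over ssh until it succeeds
	// or the deadline passes.
	func waitForSSH(addr, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath, "-p", "22",
			"docker@" + addr,
			"exit 0",
		}
		for {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				return nil // the guest shell ran "exit 0", so sshd is up
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for SSH on %s", addr)
			}
			time.Sleep(2 * time.Second)
		}
	}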
	I0729 14:37:52.456532 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetConfigRaw
	I0729 14:37:52.457156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.459620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.459946 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.459980 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.460200 1039263 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/config.json ...
	I0729 14:37:52.460384 1039263 machine.go:94] provisionDockerMachine start ...
	I0729 14:37:52.460404 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:52.460672 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.462798 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463089 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.463119 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463260 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.463428 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463594 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463703 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.463856 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.464071 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.464080 1039263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:37:52.564925 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:37:52.564959 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565214 1039263 buildroot.go:166] provisioning hostname "embed-certs-668123"
	I0729 14:37:52.565241 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565472 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.568131 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568450 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.568482 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568615 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.568825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.568975 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.569143 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.569335 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.569511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.569522 1039263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-668123 && echo "embed-certs-668123" | sudo tee /etc/hostname
	I0729 14:37:52.686424 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-668123
	
	I0729 14:37:52.686459 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.689074 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689387 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.689422 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689619 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.689825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.689999 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.690164 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.690338 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.690511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.690526 1039263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-668123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-668123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-668123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:37:52.801778 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:52.801812 1039263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:37:52.801841 1039263 buildroot.go:174] setting up certificates
	I0729 14:37:52.801851 1039263 provision.go:84] configureAuth start
	I0729 14:37:52.801863 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.802133 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.804526 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.804877 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.804910 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.805053 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.807140 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807369 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.807395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807505 1039263 provision.go:143] copyHostCerts
	I0729 14:37:52.807594 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:37:52.807608 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:37:52.807698 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:37:52.807840 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:37:52.807852 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:37:52.807891 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:37:52.807969 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:37:52.807979 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:37:52.808011 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:37:52.808084 1039263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-668123 san=[127.0.0.1 192.168.50.53 embed-certs-668123 localhost minikube]
	I0729 14:37:53.007382 1039263 provision.go:177] copyRemoteCerts
	I0729 14:37:53.007459 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:37:53.007548 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.010097 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010465 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.010488 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010660 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.010864 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.011037 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.011193 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.092043 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 14:37:53.116737 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:37:53.139828 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:37:53.162813 1039263 provision.go:87] duration metric: took 360.943219ms to configureAuth
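	The configureAuth phase above regenerates a server certificate whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost and minikube, signed by the existing minikube CA, then copies server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A sketch of the certificate-issuing step using Go's crypto/x509; it assumes an RSA/PKCS#1 CA key, and the output paths and helper name are illustrative, not minikube's code.

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"errors"
		"math/big"
		"net"
		"os"
		"time"
	)

	// issueServerCert creates a fresh RSA key and signs a server cert
	// with the SANs seen in the log, using the existing CA pair.
	func issueServerCert(caCertPEM, caKeyPEM []byte, host string, ip net.IP) error {
		caBlock, _ := pem.Decode(caCertPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			return errors.New("invalid CA PEM input")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			return err
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			return err
		}
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: host, Organization: []string{"jenkins." + host}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{host, "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), ip},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
		if err != nil {
			return err
		}
		cert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		key := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
		if err := os.WriteFile("server.pem", cert, 0o644); err != nil {
			return err
		}
		return os.WriteFile("server-key.pem", key, 0o600)
	}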
	I0729 14:37:53.162856 1039263 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:37:53.163051 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:37:53.163144 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.165757 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166102 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.166130 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166272 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.166465 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166665 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166817 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.166983 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.167154 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.167169 1039263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:37:53.428217 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:37:53.428246 1039263 machine.go:97] duration metric: took 967.84942ms to provisionDockerMachine
	I0729 14:37:53.428258 1039263 start.go:293] postStartSetup for "embed-certs-668123" (driver="kvm2")
	I0729 14:37:53.428269 1039263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:37:53.428298 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.428641 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:37:53.428669 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.431228 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431593 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.431620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431797 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.431992 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.432159 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.432313 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.511226 1039263 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:37:53.515527 1039263 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:37:53.515555 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:37:53.515635 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:37:53.515724 1039263 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:37:53.515846 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:37:53.525606 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:53.548757 1039263 start.go:296] duration metric: took 120.484005ms for postStartSetup
	I0729 14:37:53.548798 1039263 fix.go:56] duration metric: took 19.371497305s for fixHost
	I0729 14:37:53.548827 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.551373 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551697 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.551725 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.552085 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552226 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552383 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.552574 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.552746 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.552756 1039263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:37:53.653267 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263873.628230451
	
	I0729 14:37:53.653291 1039263 fix.go:216] guest clock: 1722263873.628230451
	I0729 14:37:53.653301 1039263 fix.go:229] Guest: 2024-07-29 14:37:53.628230451 +0000 UTC Remote: 2024-07-29 14:37:53.548802078 +0000 UTC m=+242.399919494 (delta=79.428373ms)
	I0729 14:37:53.653329 1039263 fix.go:200] guest clock delta is within tolerance: 79.428373ms
	I0729 14:37:53.653337 1039263 start.go:83] releasing machines lock for "embed-certs-668123", held for 19.476079428s
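	The guest-clock check above runs `date +%s.%N` on the VM, parses the result, and compares it with the host clock; here the ~79ms delta is reported as within tolerance. A small Go sketch of that comparison; the 1-second threshold is an assumption, the log only states the delta is acceptable.

	package clockcheck

	import (
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock parses `date +%s.%N` output such as
	// "1722263873.628230451" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // normalize fraction to 9 digits
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	// withinTolerance reports whether guest/host skew is acceptable.
	func withinTolerance(guest, host time.Time) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= time.Second
	}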
	I0729 14:37:53.653364 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.653673 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:53.656383 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656805 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.656836 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656958 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657597 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657831 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657923 1039263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:37:53.657981 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.658101 1039263 ssh_runner.go:195] Run: cat /version.json
	I0729 14:37:53.658129 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.660964 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661349 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661374 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661400 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661446 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661628 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661711 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661795 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.661918 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.662012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662092 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662200 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.662234 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.764286 1039263 ssh_runner.go:195] Run: systemctl --version
	I0729 14:37:53.772936 1039263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:37:53.922874 1039263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:37:53.928953 1039263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:37:53.929035 1039263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:37:53.947388 1039263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:37:53.947417 1039263 start.go:495] detecting cgroup driver to use...
	I0729 14:37:53.947496 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:37:53.964141 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:37:53.985980 1039263 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:37:53.986042 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:37:54.009646 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:37:54.023449 1039263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:37:54.139511 1039263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:37:54.312559 1039263 docker.go:233] disabling docker service ...
	I0729 14:37:54.312655 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:37:54.327466 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:37:54.342225 1039263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:37:54.485007 1039263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:37:54.623987 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:37:54.638100 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:37:54.658833 1039263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:37:54.658911 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.670274 1039263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:37:54.670366 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.681548 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.691626 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.701915 1039263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:37:54.713399 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.723631 1039263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.740625 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.751521 1039263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:37:54.761895 1039263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:37:54.761942 1039263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:37:54.775663 1039263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:37:54.785415 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:54.933441 1039263 ssh_runner.go:195] Run: sudo systemctl restart crio
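	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is forced to cgroupfs with conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A Go sketch of the two core substitutions as applied to the file contents; the helper is illustrative, the harness performs these edits over SSH.

	package criocfg

	import "regexp"

	// patchCrioConf mirrors the sed edits from the log: pin the pause
	// image and switch the cgroup manager to cgroupfs.
	func patchCrioConf(conf []byte) []byte {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgroup.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
		return conf
	}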
	I0729 14:37:55.066449 1039263 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:37:55.066539 1039263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:37:55.071614 1039263 start.go:563] Will wait 60s for crictl version
	I0729 14:37:55.071671 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:37:55.075727 1039263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:37:55.117286 1039263 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:37:55.117395 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.145732 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.179714 1039263 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:37:55.181109 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:55.184274 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.184734 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:55.184761 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.185066 1039263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 14:37:55.190374 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:55.206768 1039263 kubeadm.go:883] updating cluster {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:37:55.207054 1039263 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:37:55.207130 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:55.247814 1039263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:37:55.247890 1039263 ssh_runner.go:195] Run: which lz4
	I0729 14:37:55.251992 1039263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:37:55.256440 1039263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:37:55.256468 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:37:53.678402 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Start
	I0729 14:37:53.678610 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring networks are active...
	I0729 14:37:53.679311 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network default is active
	I0729 14:37:53.679767 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network mk-default-k8s-diff-port-751306 is active
	I0729 14:37:53.680133 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Getting domain xml...
	I0729 14:37:53.680818 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Creating domain...
	I0729 14:37:54.024601 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting to get IP...
	I0729 14:37:54.025431 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025838 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025944 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.025837 1040438 retry.go:31] will retry after 280.254814ms: waiting for machine to come up
	I0729 14:37:54.307727 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308293 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.308220 1040438 retry.go:31] will retry after 384.348242ms: waiting for machine to come up
	I0729 14:37:54.693703 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694304 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.694251 1040438 retry.go:31] will retry after 417.76448ms: waiting for machine to come up
	I0729 14:37:55.113670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114243 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114272 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.114191 1040438 retry.go:31] will retry after 589.741485ms: waiting for machine to come up
	I0729 14:37:55.706127 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706646 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.706569 1040438 retry.go:31] will retry after 471.427821ms: waiting for machine to come up
	I0729 14:37:56.179380 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179867 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179896 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.179814 1040438 retry.go:31] will retry after 624.275074ms: waiting for machine to come up
	I0729 14:37:56.805673 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806042 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806063 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.806018 1040438 retry.go:31] will retry after 1.027377333s: waiting for machine to come up
	I0729 14:37:56.743842 1039263 crio.go:462] duration metric: took 1.49188656s to copy over tarball
	I0729 14:37:56.743941 1039263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:37:58.879573 1039263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135595087s)
	I0729 14:37:58.879619 1039263 crio.go:469] duration metric: took 2.135735155s to extract the tarball
	I0729 14:37:58.879628 1039263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:37:58.916966 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:58.958323 1039263 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:37:58.958349 1039263 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:37:58.958357 1039263 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.30.3 crio true true} ...
	I0729 14:37:58.958537 1039263 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-668123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:37:58.958632 1039263 ssh_runner.go:195] Run: crio config
	I0729 14:37:59.004120 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:37:59.004146 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:37:59.004163 1039263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:37:59.004192 1039263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-668123 NodeName:embed-certs-668123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:37:59.004371 1039263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-668123"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:37:59.004469 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:37:59.014796 1039263 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:37:59.014866 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:37:59.024575 1039263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 14:37:59.040707 1039263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:37:59.056693 1039263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 14:37:59.073320 1039263 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0729 14:37:59.077226 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
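
The two commands above first grep /etc/hosts for a control-plane.minikube.internal entry and, when the grep misses, rewrite the file with the entry appended. A minimal Go sketch of the same idea follows (illustrative only: minikube runs this as a bash one-liner over ssh_runner; the IP and hostname are the ones from this run, and ensureHostsEntry is a hypothetical helper name):

    // ensureHostsEntry drops any stale line for host from /etc/hosts and
    // appends "ip<TAB>host", like the grep -v + echo pipeline in the log.
    // Requires root; trailing-whitespace edge cases are ignored in this sketch.
    package main

    import (
        "os"
        "strings"
    )

    func ensureHostsEntry(ip, host string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any existing entry for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = ensureHostsEntry("192.168.50.53", "control-plane.minikube.internal")
    }
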
	I0729 14:37:59.091283 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:59.221532 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:37:59.239319 1039263 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123 for IP: 192.168.50.53
	I0729 14:37:59.239362 1039263 certs.go:194] generating shared ca certs ...
	I0729 14:37:59.239387 1039263 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:37:59.239604 1039263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:37:59.239654 1039263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:37:59.239667 1039263 certs.go:256] generating profile certs ...
	I0729 14:37:59.239818 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/client.key
	I0729 14:37:59.239922 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key.544998fe
	I0729 14:37:59.239969 1039263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key
	I0729 14:37:59.240137 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:37:59.240188 1039263 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:37:59.240202 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:37:59.240238 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:37:59.240280 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:37:59.240313 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:37:59.240385 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:59.241551 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:37:59.278842 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:37:59.305668 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:37:59.332314 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:37:59.377867 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 14:37:59.405592 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:37:59.438073 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:37:59.462130 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:37:59.489158 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:37:59.511811 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:37:59.534728 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:37:59.558680 1039263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:37:59.575404 1039263 ssh_runner.go:195] Run: openssl version
	I0729 14:37:59.581518 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:37:59.592024 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596913 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596983 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.602973 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:37:59.613891 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:37:59.624053 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628881 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628922 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.634672 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:37:59.645513 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:37:59.656385 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661141 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661209 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.667169 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
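
Each certificate copied into /usr/share/ca-certificates is made visible to OpenSSL-based clients by symlinking it under /etc/ssl/certs as <subject-hash>.0, where the hash is what "openssl x509 -hash -noout" prints (b5213941, 51391683, 3ec20f2e above). A hedged sketch of that step, shelling out to openssl exactly as the log does (hashLink is an illustrative name, not minikube code):

    // hashLink symlinks certPath into /etc/ssl/certs under its OpenSSL subject
    // hash (e.g. b5213941.0), mirroring the "openssl x509 -hash" + "ln -fs"
    // pair in the log. Requires openssl on PATH and root privileges.
    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func hashLink(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // emulate ln -fs: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        _ = hashLink("/usr/share/ca-certificates/minikubeCA.pem")
    }
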
	I0729 14:37:59.678240 1039263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:37:59.683075 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:37:59.689013 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:37:59.694754 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:37:59.700865 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:37:59.706664 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:37:59.712457 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
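
"openssl x509 -checkend 86400" exits non-zero when a certificate expires within the next 24 hours, which is how the lines above decide whether the control-plane certs need regenerating. The equivalent check in Go with crypto/x509, as a sketch (the path is one of the certs from this run; expiresWithin is an illustrative name):

    // expiresWithin reports whether the PEM certificate at path expires within
    // d, the Go analogue of "openssl x509 -checkend <seconds>".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
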
	I0729 14:37:59.718347 1039263 kubeadm.go:392] StartCluster: {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:37:59.718460 1039263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:37:59.718505 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.756046 1039263 cri.go:89] found id: ""
	I0729 14:37:59.756143 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:37:59.766198 1039263 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:37:59.766222 1039263 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:37:59.766278 1039263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:37:59.775664 1039263 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:37:59.776877 1039263 kubeconfig.go:125] found "embed-certs-668123" server: "https://192.168.50.53:8443"
	I0729 14:37:59.778802 1039263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:37:59.787805 1039263 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.53
	I0729 14:37:59.787840 1039263 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:37:59.787854 1039263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:37:59.787908 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.828927 1039263 cri.go:89] found id: ""
	I0729 14:37:59.829016 1039263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:37:59.844889 1039263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:37:59.854233 1039263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:37:59.854264 1039263 kubeadm.go:157] found existing configuration files:
	
	I0729 14:37:59.854334 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:37:59.863123 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:37:59.863183 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:37:59.872154 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:37:59.880819 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:37:59.880881 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:37:59.889714 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.898278 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:37:59.898330 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.907358 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:37:59.916352 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:37:59.916430 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:37:59.925239 1039263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:37:59.934353 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.045086 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.793783 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.009839 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.080217 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
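
Because existing configuration was found, the restart replays individual "kubeadm init phase" subcommands against the staged config instead of running a full "kubeadm init". A sketch of that sequence, under the assumption that the version-pinned kubeadm binary and config path from this run are used directly (minikube itself drives these commands over ssh_runner, not locally):

    // runPhases replays the kubeadm init phases seen in the log, in order,
    // against the kubeadm.yaml staged under /var/tmp/minikube. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runPhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() { _ = runPhases() }
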
	I0729 14:38:01.153377 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:01.153496 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:37:57.835202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835636 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835674 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:57.835572 1040438 retry.go:31] will retry after 987.763901ms: waiting for machine to come up
	I0729 14:37:58.824975 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825428 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825457 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:58.825348 1040438 retry.go:31] will retry after 1.189429393s: waiting for machine to come up
	I0729 14:38:00.016130 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016569 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016604 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:00.016509 1040438 retry.go:31] will retry after 1.424039091s: waiting for machine to come up
	I0729 14:38:01.443138 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443511 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:01.443470 1040438 retry.go:31] will retry after 2.531090823s: waiting for machine to come up
	I0729 14:38:01.653905 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.153772 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.653590 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.669429 1039263 api_server.go:72] duration metric: took 1.516051254s to wait for apiserver process to appear ...
	I0729 14:38:02.669467 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:02.669495 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.531413 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.531451 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.531467 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.602173 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.602205 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.670522 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.680835 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:05.680861 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.170512 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.176052 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.176084 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.669679 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.674813 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.674854 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:07.170539 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:07.174573 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:38:07.180250 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:07.180275 1039263 api_server.go:131] duration metric: took 4.510799806s to wait for apiserver health ...
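
The readiness wait above is a plain poll of /healthz until it returns 200, tolerating the 403s (RBAC is not bootstrapped yet, so anonymous requests are forbidden) and the 500s with failing poststarthooks along the way. A bare-bones version of that loop (a sketch: a real client would present the cluster CA and a client certificate rather than skipping TLS verification):

    // waitHealthz polls the apiserver /healthz endpoint until it answers
    // 200 OK or the deadline passes. TLS verification is skipped only to keep
    // the sketch short; minikube authenticates properly.
    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered "ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
        return errors.New("apiserver /healthz never returned 200")
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.50.53:8443/healthz", 2*time.Minute))
    }
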
	I0729 14:38:07.180284 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:38:07.180290 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:07.181866 1039263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:03.976004 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976514 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976544 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:03.976474 1040438 retry.go:31] will retry after 3.356304099s: waiting for machine to come up
	I0729 14:38:07.335600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336031 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336086 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:07.335992 1040438 retry.go:31] will retry after 3.345416128s: waiting for machine to come up
	I0729 14:38:07.182966 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:07.193166 1039263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
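
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced by the "Configuring bridge CNI" line. Minikube's exact template is not shown in the log; the snippet below writes a generic bridge + host-local conflist for the 10.244.0.0/16 pod CIDR purely to illustrate the shape of that file:

    // writeBridgeConflist drops a minimal bridge CNI config into /etc/cni/net.d.
    // The JSON is a generic example, not minikube's actual template.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    `

    func writeBridgeConflist() error {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            return err
        }
        return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
    }

    func main() { _ = writeBridgeConflist() }
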
	I0729 14:38:07.212801 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:07.221297 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:07.221331 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:07.221340 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:07.221347 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:07.221352 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:07.221364 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:38:07.221370 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:07.221379 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:07.221384 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:38:07.221390 1039263 system_pods.go:74] duration metric: took 8.574498ms to wait for pod list to return data ...
	I0729 14:38:07.221397 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:07.224197 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:07.224220 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:07.224231 1039263 node_conditions.go:105] duration metric: took 2.829585ms to run NodePressure ...
	I0729 14:38:07.224246 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:07.520049 1039263 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524228 1039263 kubeadm.go:739] kubelet initialised
	I0729 14:38:07.524251 1039263 kubeadm.go:740] duration metric: took 4.174563ms waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524262 1039263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:07.529174 1039263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.533534 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533554 1039263 pod_ready.go:81] duration metric: took 4.355926ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.533562 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.537529 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537550 1039263 pod_ready.go:81] duration metric: took 3.975082ms for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.537561 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.542299 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542325 1039263 pod_ready.go:81] duration metric: took 4.747863ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.542371 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542390 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.616688 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616725 1039263 pod_ready.go:81] duration metric: took 74.323327ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.616740 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616750 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.016334 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016360 1039263 pod_ready.go:81] duration metric: took 399.599984ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.016369 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016374 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.416536 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416571 1039263 pod_ready.go:81] duration metric: took 400.189243ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.416585 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416594 1039263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.817526 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817561 1039263 pod_ready.go:81] duration metric: took 400.956263ms for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.817572 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817590 1039263 pod_ready.go:38] duration metric: took 1.293313082s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
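
Every pod_ready wait above short-circuits with the pod_ready.go:97 message because the check also requires the hosting node to be Ready, and embed-certs-668123 still reports "Ready":"False" right after the kubelet restart. A condensed client-go sketch of that per-pod check (assumes a kubeconfig path from this run; podAndNodeReady is an illustrative name, and the node gate is the part that fails here):

    // podAndNodeReady reports whether the pod is Ready and its node is Ready,
    // mirroring the two conditions the waits above evaluate. Sketch only.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podAndNodeReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                return false, nil // node not Ready: skip the pod, as the log does
            }
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19338-974764/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, _ := kubernetes.NewForConfig(cfg)
        fmt.Println(podAndNodeReady(cs, "kube-system", "coredns-7db6d8ff4d-6dhzz"))
    }
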
	I0729 14:38:08.817610 1039263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:38:08.829394 1039263 ops.go:34] apiserver oom_adj: -16
	I0729 14:38:08.829425 1039263 kubeadm.go:597] duration metric: took 9.06319609s to restartPrimaryControlPlane
	I0729 14:38:08.829436 1039263 kubeadm.go:394] duration metric: took 9.111098315s to StartCluster
	I0729 14:38:08.829457 1039263 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.829548 1039263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:08.831113 1039263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.831396 1039263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:38:08.831441 1039263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:38:08.831524 1039263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-668123"
	I0729 14:38:08.831539 1039263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-668123"
	I0729 14:38:08.831562 1039263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-668123"
	W0729 14:38:08.831572 1039263 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:38:08.831561 1039263 addons.go:69] Setting metrics-server=true in profile "embed-certs-668123"
	I0729 14:38:08.831593 1039263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-668123"
	I0729 14:38:08.831601 1039263 addons.go:234] Setting addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:08.831609 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	W0729 14:38:08.831610 1039263 addons.go:243] addon metrics-server should already be in state true
	I0729 14:38:08.831632 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:08.831644 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.831916 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831933 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831918 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831956 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831945 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831964 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.833223 1039263 out.go:177] * Verifying Kubernetes components...
	I0729 14:38:08.834403 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:08.847231 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0729 14:38:08.847362 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0729 14:38:08.847398 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0729 14:38:08.847797 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847896 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847904 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.848350 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848371 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848487 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848507 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848520 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848540 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848854 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848867 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.849010 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849392 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.849416 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.851933 1039263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-668123"
	W0729 14:38:08.851955 1039263 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:38:08.851988 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.852284 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.852330 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.865255 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I0729 14:38:08.865707 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.865981 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0729 14:38:08.866157 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866183 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.866419 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.866531 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.866804 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.866895 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866920 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.867272 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.867839 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.867885 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.868000 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0729 14:38:08.868397 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.868741 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.868886 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.868903 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.869276 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.869501 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.870835 1039263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:38:08.871289 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.872267 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:38:08.872289 1039263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:38:08.872306 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.873165 1039263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:08.874593 1039263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:08.874616 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:38:08.874635 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.875061 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875572 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.875605 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875815 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.876012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.876208 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.876370 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.877997 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878394 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.878423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878555 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.878726 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.878889 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.879002 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.890720 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0729 14:38:08.891092 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.891619 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.891638 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.891972 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.892184 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.893577 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.893817 1039263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:08.893840 1039263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:38:08.893859 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.896843 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897302 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.897320 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897464 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.897618 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.897866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.897966 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:09.019001 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:09.038038 1039263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:09.097896 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:09.101844 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:09.229339 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:38:09.229360 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:38:09.317591 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:38:09.317625 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:38:09.370444 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:09.370490 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:38:09.407869 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:10.014739 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014767 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.014873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014897 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015112 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015150 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015157 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015166 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015174 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015284 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015297 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015306 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015313 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015384 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015413 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015611 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015641 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024010 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.024031 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.024299 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.024318 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024343 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.233873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.233903 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234247 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.234260 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234275 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234292 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.234301 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234546 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234563 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234574 1039263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:10.236215 1039263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:38:10.237377 1039263 addons.go:510] duration metric: took 1.405942032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
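The lines above show the metrics-server, storage-provisioner and storageclass manifests being copied into /etc/kubernetes/addons on the VM and then applied with the bundled kubectl over SSH. The following is a minimal, hypothetical Go sketch of that apply step run locally; it is not minikube's own code, and the kubectl binary on PATH plus the manifest file names are assumptions for illustration:

```go
// applyaddons.go - hypothetical sketch of the "kubectl apply -f <addon manifests>"
// step logged above; not minikube's implementation.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Manifest names mirror the files the log copies into /etc/kubernetes/addons;
	// local copies are assumed here.
	manifests := []string{
		"metrics-apiservice.yaml",
		"metrics-server-deployment.yaml",
		"metrics-server-rbac.yaml",
		"metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// Runs: kubectl apply -f <m1> -f <m2> ... against the current kubeconfig context.
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("applied %d manifests:\n%s", len(manifests), out)
}
```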
	I0729 14:38:11.042263 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:12.129080 1039759 start.go:364] duration metric: took 3m18.14725367s to acquireMachinesLock for "old-k8s-version-360866"
	I0729 14:38:12.129155 1039759 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:12.129166 1039759 fix.go:54] fixHost starting: 
	I0729 14:38:12.129715 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:12.129752 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:12.146596 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0729 14:38:12.147101 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:12.147554 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:38:12.147581 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:12.147871 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:12.148094 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:12.148293 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetState
	I0729 14:38:12.149880 1039759 fix.go:112] recreateIfNeeded on old-k8s-version-360866: state=Stopped err=<nil>
	I0729 14:38:12.149918 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	W0729 14:38:12.150120 1039759 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:12.152003 1039759 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-360866" ...
	I0729 14:38:10.683699 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684108 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Found IP for machine: 192.168.72.233
	I0729 14:38:10.684148 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has current primary IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684161 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserving static IP address...
	I0729 14:38:10.684506 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.684540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | skip adding static IP to network mk-default-k8s-diff-port-751306 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"}
	I0729 14:38:10.684558 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserved static IP address: 192.168.72.233
	I0729 14:38:10.684581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for SSH to be available...
	I0729 14:38:10.684600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Getting to WaitForSSH function...
	I0729 14:38:10.686336 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686684 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.686713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686825 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH client type: external
	I0729 14:38:10.686856 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa (-rw-------)
	I0729 14:38:10.686894 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:10.686904 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | About to run SSH command:
	I0729 14:38:10.686921 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | exit 0
	I0729 14:38:10.808536 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:10.808965 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetConfigRaw
	I0729 14:38:10.809613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:10.812200 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812590 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.812625 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812862 1039440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/config.json ...
	I0729 14:38:10.813089 1039440 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:10.813110 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:10.813322 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.815607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.815933 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.815962 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.816113 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.816287 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816450 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.816838 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.817167 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.817184 1039440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:10.916864 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:10.916908 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917215 1039440 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751306"
	I0729 14:38:10.917249 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.919961 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920339 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.920363 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920471 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.920660 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.921145 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.921358 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.921377 1039440 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751306 && echo "default-k8s-diff-port-751306" | sudo tee /etc/hostname
	I0729 14:38:11.034826 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751306
	
	I0729 14:38:11.034859 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.037494 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.037836 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.037870 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.038068 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.038274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038410 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038575 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.038736 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.038971 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.038998 1039440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751306/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:11.146350 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:11.146391 1039440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:11.146449 1039440 buildroot.go:174] setting up certificates
	I0729 14:38:11.146463 1039440 provision.go:84] configureAuth start
	I0729 14:38:11.146478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:11.146842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:11.149492 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149766 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.149796 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.152449 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152735 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.152785 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152956 1039440 provision.go:143] copyHostCerts
	I0729 14:38:11.153010 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:11.153021 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:11.153074 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:11.153172 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:11.153180 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:11.153198 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:11.153253 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:11.153260 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:11.153276 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:11.153324 1039440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751306 san=[127.0.0.1 192.168.72.233 default-k8s-diff-port-751306 localhost minikube]
	I0729 14:38:11.489907 1039440 provision.go:177] copyRemoteCerts
	I0729 14:38:11.489990 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:11.490028 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.492487 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492801 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.492832 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492992 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.493220 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.493467 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.493611 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.574475 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:11.598182 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:11.622809 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 14:38:11.646533 1039440 provision.go:87] duration metric: took 500.054139ms to configureAuth
	I0729 14:38:11.646563 1039440 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:11.646742 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:11.646822 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.649260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.649616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649729 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.649934 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.650436 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.650610 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.650628 1039440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:11.906877 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:11.906918 1039440 machine.go:97] duration metric: took 1.093811728s to provisionDockerMachine
	I0729 14:38:11.906936 1039440 start.go:293] postStartSetup for "default-k8s-diff-port-751306" (driver="kvm2")
	I0729 14:38:11.906951 1039440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:11.906977 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:11.907366 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:11.907407 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.910366 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910725 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.910748 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910913 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.911162 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.911323 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.911529 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.992133 1039440 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:11.996426 1039440 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:11.996456 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:11.996544 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:11.996641 1039440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:11.996747 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:12.006165 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:12.029591 1039440 start.go:296] duration metric: took 122.613174ms for postStartSetup
	I0729 14:38:12.029643 1039440 fix.go:56] duration metric: took 18.376148792s for fixHost
	I0729 14:38:12.029670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.032299 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032667 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.032731 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032901 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.033104 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033372 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.033510 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:12.033679 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:12.033688 1039440 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:12.128889 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263892.107886376
	
	I0729 14:38:12.128917 1039440 fix.go:216] guest clock: 1722263892.107886376
	I0729 14:38:12.128926 1039440 fix.go:229] Guest: 2024-07-29 14:38:12.107886376 +0000 UTC Remote: 2024-07-29 14:38:12.029648961 +0000 UTC m=+239.632909800 (delta=78.237415ms)
	I0729 14:38:12.128955 1039440 fix.go:200] guest clock delta is within tolerance: 78.237415ms
	I0729 14:38:12.128961 1039440 start.go:83] releasing machines lock for "default-k8s-diff-port-751306", held for 18.475504041s
	I0729 14:38:12.128995 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.129301 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:12.132025 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132374 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.132401 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132566 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133087 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133273 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133349 1039440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:12.133404 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.133513 1039440 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:12.133534 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.136121 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136149 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136523 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136577 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136624 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136716 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136793 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136917 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.136973 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.137088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137165 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137292 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.137232 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.233842 1039440 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:12.240082 1039440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:12.388404 1039440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:12.395038 1039440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:12.395127 1039440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:12.416590 1039440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:12.416618 1039440 start.go:495] detecting cgroup driver to use...
	I0729 14:38:12.416682 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:12.437863 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:12.453458 1039440 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:12.453508 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:12.467657 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:12.482328 1039440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:12.610786 1039440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:12.774787 1039440 docker.go:233] disabling docker service ...
	I0729 14:38:12.774861 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:12.790091 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:12.803914 1039440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:12.933894 1039440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:13.052159 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:13.069309 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:13.089959 1039440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:38:13.090014 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.102668 1039440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:13.102741 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.113634 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.124374 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.135488 1039440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:13.147171 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.159757 1039440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.178620 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.189326 1039440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:13.200007 1039440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:13.200067 1039440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:13.213063 1039440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:13.226044 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:13.360685 1039440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:13.508473 1039440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:13.508556 1039440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:13.513547 1039440 start.go:563] Will wait 60s for crictl version
	I0729 14:38:13.513619 1039440 ssh_runner.go:195] Run: which crictl
	I0729 14:38:13.518528 1039440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:13.567103 1039440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:13.567180 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.603837 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.633583 1039440 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
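The 1039440 run above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image registry.k8s.io/pause:3.9, cgroup_manager "cgroupfs", conmon_cgroup "pod"), then restarts crio and waits for the socket. Below is a stdlib-only Go sketch of equivalent in-place edits on a local copy of that config file; the file path is an assumption and this is an illustration of the sed commands in the log, not minikube's implementation:

```go
// crioconf.go - hypothetical sketch of the CRI-O config edits logged above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Assumed local path; the real run edits /etc/crio/crio.conf.d/02-crio.conf on the VM.
	path := "02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// pause_image = "registry.k8s.io/pause:3.9" (same effect as the first sed in the log)
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// cgroup_manager = "cgroupfs"
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager,
	// mirroring the delete-then-append sed pair in the log.
	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAll(data, nil)
	data = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAll(data, []byte("$1\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("patched", path, "- restart crio (sudo systemctl restart crio) to apply")
}
```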
	I0729 14:38:12.153214 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .Start
	I0729 14:38:12.153408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring networks are active...
	I0729 14:38:12.154141 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network default is active
	I0729 14:38:12.154590 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network mk-old-k8s-version-360866 is active
	I0729 14:38:12.154970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Getting domain xml...
	I0729 14:38:12.155733 1039759 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:38:12.526504 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting to get IP...
	I0729 14:38:12.527560 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.528068 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.528147 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.528048 1040622 retry.go:31] will retry after 240.079974ms: waiting for machine to come up
	I0729 14:38:12.769388 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.769881 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.769910 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.769829 1040622 retry.go:31] will retry after 271.200632ms: waiting for machine to come up
	I0729 14:38:13.042584 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.043069 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.043101 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.043049 1040622 retry.go:31] will retry after 464.725959ms: waiting for machine to come up
	I0729 14:38:13.509830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.510400 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.510434 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.510350 1040622 retry.go:31] will retry after 416.316047ms: waiting for machine to come up
	I0729 14:38:13.042877 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:15.051282 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:13.635092 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:13.638202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638665 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:13.638691 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638933 1039440 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:13.642960 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:13.656098 1039440 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0729 14:38:13.656208 1039440 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:38:13.656255 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:13.697398 1039440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:38:13.697475 1039440 ssh_runner.go:195] Run: which lz4
	I0729 14:38:13.701632 1039440 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:38:13.707129 1039440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:13.707162 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:38:15.218414 1039440 crio.go:462] duration metric: took 1.516807674s to copy over tarball
	I0729 14:38:15.218505 1039440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:13.927885 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.928343 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.928373 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.928307 1040622 retry.go:31] will retry after 659.670364ms: waiting for machine to come up
	I0729 14:38:14.589644 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:14.590143 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:14.590172 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:14.590031 1040622 retry.go:31] will retry after 738.020335ms: waiting for machine to come up
	I0729 14:38:15.330093 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:15.330603 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:15.330633 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:15.330553 1040622 retry.go:31] will retry after 1.13067902s: waiting for machine to come up
	I0729 14:38:16.462554 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:16.463002 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:16.463031 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:16.462977 1040622 retry.go:31] will retry after 1.342785853s: waiting for machine to come up
	I0729 14:38:17.806889 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:17.807333 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:17.807365 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:17.807266 1040622 retry.go:31] will retry after 1.804812934s: waiting for machine to come up
	I0729 14:38:16.550848 1039263 node_ready.go:49] node "embed-certs-668123" has status "Ready":"True"
	I0729 14:38:16.550880 1039263 node_ready.go:38] duration metric: took 7.512808712s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:16.550895 1039263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:16.563220 1039263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570054 1039263 pod_ready.go:92] pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:16.570080 1039263 pod_ready.go:81] duration metric: took 6.832939ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570091 1039263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:19.207981 1039263 pod_ready.go:102] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:17.498961 1039440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.280415291s)
	I0729 14:38:17.498997 1039440 crio.go:469] duration metric: took 2.280548689s to extract the tarball
	I0729 14:38:17.499008 1039440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:17.537972 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:17.583582 1039440 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:38:17.583609 1039440 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:38:17.583617 1039440 kubeadm.go:934] updating node { 192.168.72.233 8444 v1.30.3 crio true true} ...
	I0729 14:38:17.583719 1039440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:17.583789 1039440 ssh_runner.go:195] Run: crio config
	I0729 14:38:17.637202 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:17.637230 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:17.637243 1039440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:17.637272 1039440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.233 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751306 NodeName:default-k8s-diff-port-751306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:38:17.637451 1039440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751306"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:17.637528 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:38:17.650173 1039440 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:17.650259 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:17.661790 1039440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 14:38:17.680720 1039440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:17.700420 1039440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 14:38:17.723134 1039440 ssh_runner.go:195] Run: grep 192.168.72.233	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:17.727510 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:17.741033 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:17.889833 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:17.910486 1039440 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306 for IP: 192.168.72.233
	I0729 14:38:17.910540 1039440 certs.go:194] generating shared ca certs ...
	I0729 14:38:17.910565 1039440 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:17.910763 1039440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:17.910821 1039440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:17.910833 1039440 certs.go:256] generating profile certs ...
	I0729 14:38:17.910941 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/client.key
	I0729 14:38:17.911003 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key.811a3f6d
	I0729 14:38:17.911105 1039440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key
	I0729 14:38:17.911271 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:17.911315 1039440 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:17.911329 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:17.911362 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:17.911393 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:17.911426 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:17.911478 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:17.912301 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:17.948102 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:17.984122 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:18.019932 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:18.062310 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 14:38:18.093176 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:38:18.124016 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:18.151933 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:38:18.179714 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:18.203414 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:18.233286 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:18.262871 1039440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:18.283064 1039440 ssh_runner.go:195] Run: openssl version
	I0729 14:38:18.289016 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:18.299409 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304053 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304115 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.309976 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:18.321472 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:18.331916 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336822 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336881 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.342762 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:18.353478 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:18.364299 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369024 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369076 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.376534 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:18.387360 1039440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:18.392392 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:18.398520 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:18.404397 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:18.410922 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:18.417193 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:18.423808 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:18.433345 1039440 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:18.433463 1039440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:18.433582 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.476749 1039440 cri.go:89] found id: ""
	I0729 14:38:18.476834 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:18.488548 1039440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:18.488570 1039440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:18.488628 1039440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:18.499081 1039440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:18.500064 1039440 kubeconfig.go:125] found "default-k8s-diff-port-751306" server: "https://192.168.72.233:8444"
	I0729 14:38:18.502130 1039440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:18.511589 1039440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.233
	I0729 14:38:18.511631 1039440 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:18.511646 1039440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:18.511698 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.559691 1039440 cri.go:89] found id: ""
	I0729 14:38:18.559779 1039440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:18.576217 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:18.586031 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:18.586057 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:18.586110 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:38:18.595032 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:18.595096 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:18.604320 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:38:18.613996 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:18.614053 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:18.623345 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.631898 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:18.631943 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.641303 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:38:18.649849 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:18.649907 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:18.659657 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:18.668914 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:18.782351 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:19.902413 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.120025721s)
	I0729 14:38:19.902451 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.120455 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.206099 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.293738 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:20.293850 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:20.794840 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.294958 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.313567 1039440 api_server.go:72] duration metric: took 1.019826572s to wait for apiserver process to appear ...
	I0729 14:38:21.313600 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:21.313625 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:21.314152 1039440 api_server.go:269] stopped: https://192.168.72.233:8444/healthz: Get "https://192.168.72.233:8444/healthz": dial tcp 192.168.72.233:8444: connect: connection refused
	I0729 14:38:21.813935 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:19.613474 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:19.613801 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:19.613830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:19.613749 1040622 retry.go:31] will retry after 1.449593132s: waiting for machine to come up
	I0729 14:38:21.064774 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:21.065382 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:21.065405 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:21.065314 1040622 retry.go:31] will retry after 1.807508073s: waiting for machine to come up
	I0729 14:38:22.874485 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:22.874896 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:22.874925 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:22.874844 1040622 retry.go:31] will retry after 3.036719557s: waiting for machine to come up
	I0729 14:38:21.578125 1039263 pod_ready.go:92] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.578152 1039263 pod_ready.go:81] duration metric: took 5.008051755s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.578164 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584521 1039263 pod_ready.go:92] pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.584544 1039263 pod_ready.go:81] duration metric: took 6.372252ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584558 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590245 1039263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.590269 1039263 pod_ready.go:81] duration metric: took 5.702853ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590280 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594576 1039263 pod_ready.go:92] pod "kube-proxy-2v79q" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.594602 1039263 pod_ready.go:81] duration metric: took 4.314692ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594614 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787339 1039263 pod_ready.go:92] pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.787379 1039263 pod_ready.go:81] duration metric: took 192.756548ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787399 1039263 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:23.795588 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:24.561135 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:24.561176 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:24.561195 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.635519 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.635550 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:24.813755 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.817972 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.818003 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.314643 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.320059 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.320094 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.814758 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.820578 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.820613 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.314798 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.319346 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.319384 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.813907 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.821176 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.821208 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.314614 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.319335 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:27.319361 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.814188 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.819010 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:38:27.826057 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:27.826082 1039440 api_server.go:131] duration metric: took 6.512474877s to wait for apiserver health ...
	I0729 14:38:27.826091 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:27.826098 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:27.827698 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:25.913642 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:25.914139 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:25.914166 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:25.914099 1040622 retry.go:31] will retry after 3.839238383s: waiting for machine to come up
	I0729 14:38:26.293618 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:28.294115 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:30.296010 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.361688 1038758 start.go:364] duration metric: took 52.182622805s to acquireMachinesLock for "no-preload-603534"
	I0729 14:38:31.361756 1038758 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:31.361765 1038758 fix.go:54] fixHost starting: 
	I0729 14:38:31.362279 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:31.362319 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:31.380259 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0729 14:38:31.380783 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:31.381320 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:38:31.381349 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:31.381649 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:31.381848 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:31.381989 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:38:31.383537 1038758 fix.go:112] recreateIfNeeded on no-preload-603534: state=Stopped err=<nil>
	I0729 14:38:31.383561 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	W0729 14:38:31.383739 1038758 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:31.385496 1038758 out.go:177] * Restarting existing kvm2 VM for "no-preload-603534" ...
	I0729 14:38:31.386878 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Start
	I0729 14:38:31.387071 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring networks are active...
	I0729 14:38:31.387821 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network default is active
	I0729 14:38:31.388141 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network mk-no-preload-603534 is active
	I0729 14:38:31.388649 1038758 main.go:141] libmachine: (no-preload-603534) Getting domain xml...
	I0729 14:38:31.391807 1038758 main.go:141] libmachine: (no-preload-603534) Creating domain...
	I0729 14:38:27.829109 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:27.839810 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:38:27.858724 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:27.868075 1039440 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:27.868112 1039440 system_pods.go:61] "coredns-7db6d8ff4d-m6dlw" [7ce45b48-f04d-4167-8a6e-643b2fb3c4f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:27.868121 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [7ccadfd7-8b68-45c0-9670-af97b90d35d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:27.868128 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [5e8c8e17-28db-499c-a940-e67d92b28bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:27.868134 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [a2d31d58-d8d9-4070-96af-0d1af763d0b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:27.868140 1039440 system_pods.go:61] "kube-proxy-p6dv5" [c44edf0a-f608-49f2-ab53-7ffbcdf13b5e] Running
	I0729 14:38:27.868146 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [b87ee044-f43f-4aa7-94b3-4f2ad2213ce9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:27.868152 1039440 system_pods.go:61] "metrics-server-569cc877fc-gmz64" [296e883c-7394-4004-a25f-e93b4be52d46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:27.868156 1039440 system_pods.go:61] "storage-provisioner" [ec3b78f1-96a3-47b2-958d-82258a074634] Running
	I0729 14:38:27.868165 1039440 system_pods.go:74] duration metric: took 9.405484ms to wait for pod list to return data ...
	I0729 14:38:27.868173 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:27.871538 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:27.871563 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:27.871575 1039440 node_conditions.go:105] duration metric: took 3.397306ms to run NodePressure ...
	I0729 14:38:27.871596 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:28.143890 1039440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148855 1039440 kubeadm.go:739] kubelet initialised
	I0729 14:38:28.148880 1039440 kubeadm.go:740] duration metric: took 4.952057ms waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148891 1039440 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:28.154636 1039440 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:30.161265 1039440 pod_ready.go:102] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.161979 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:31.162005 1039440 pod_ready.go:81] duration metric: took 3.007344998s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:31.162015 1039440 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:29.755060 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755512 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has current primary IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755524 1039759 main.go:141] libmachine: (old-k8s-version-360866) Found IP for machine: 192.168.39.71
	I0729 14:38:29.755536 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserving static IP address...
	I0729 14:38:29.755975 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.756008 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserved static IP address: 192.168.39.71
	I0729 14:38:29.756035 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | skip adding static IP to network mk-old-k8s-version-360866 - found existing host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"}
	I0729 14:38:29.756048 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting for SSH to be available...
	I0729 14:38:29.756067 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Getting to WaitForSSH function...
	I0729 14:38:29.758527 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.758899 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.758944 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.759003 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH client type: external
	I0729 14:38:29.759024 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa (-rw-------)
	I0729 14:38:29.759058 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:29.759070 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | About to run SSH command:
	I0729 14:38:29.759083 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | exit 0
	I0729 14:38:29.884425 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:29.884833 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:38:29.885450 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:29.887929 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888241 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.888294 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888624 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:38:29.888895 1039759 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:29.888919 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:29.889221 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.891654 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892013 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.892038 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892163 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.892350 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892598 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892764 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.892968 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.893158 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.893169 1039759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:29.993529 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:29.993564 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.993859 1039759 buildroot.go:166] provisioning hostname "old-k8s-version-360866"
	I0729 14:38:29.993893 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.994074 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.996882 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997279 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.997308 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997537 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.997699 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997856 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997976 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.998206 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.998412 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.998429 1039759 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-360866 && echo "old-k8s-version-360866" | sudo tee /etc/hostname
	I0729 14:38:30.115298 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-360866
	
	I0729 14:38:30.115331 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.118349 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.118763 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.118793 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.119029 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.119203 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119356 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119561 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.119772 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.119976 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.120019 1039759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-360866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-360866/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-360866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:30.229987 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:30.230017 1039759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:30.230059 1039759 buildroot.go:174] setting up certificates
	I0729 14:38:30.230070 1039759 provision.go:84] configureAuth start
	I0729 14:38:30.230090 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:30.230436 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:30.233150 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233501 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.233533 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233719 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.236157 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236494 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.236534 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236713 1039759 provision.go:143] copyHostCerts
	I0729 14:38:30.236786 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:30.236797 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:30.236856 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:30.236976 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:30.236986 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:30.237006 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:30.237071 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:30.237078 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:30.237095 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:30.237153 1039759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-360866 san=[127.0.0.1 192.168.39.71 localhost minikube old-k8s-version-360866]
	I0729 14:38:30.680859 1039759 provision.go:177] copyRemoteCerts
	I0729 14:38:30.680933 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:30.680970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.683890 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684229 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.684262 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684430 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.684634 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.684822 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.684973 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:30.770659 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:30.799011 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 14:38:30.825536 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:30.850751 1039759 provision.go:87] duration metric: took 620.664228ms to configureAuth
	I0729 14:38:30.850795 1039759 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:30.850998 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:38:30.851072 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.853735 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854065 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.854102 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854197 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.854408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854559 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854717 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.854961 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.855169 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.855187 1039759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:31.119354 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:31.119386 1039759 machine.go:97] duration metric: took 1.230472142s to provisionDockerMachine
	I0729 14:38:31.119401 1039759 start.go:293] postStartSetup for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:38:31.119415 1039759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:31.119456 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.119885 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:31.119926 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.123196 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123576 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.123607 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123826 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.124053 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.124276 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.124469 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.208607 1039759 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:31.213173 1039759 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:31.213206 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:31.213268 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:31.213352 1039759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:31.213454 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:31.225256 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:31.253156 1039759 start.go:296] duration metric: took 133.735669ms for postStartSetup
	I0729 14:38:31.253208 1039759 fix.go:56] duration metric: took 19.124042428s for fixHost
	I0729 14:38:31.253237 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.256005 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256340 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.256375 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256535 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.256732 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.256927 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.257075 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.257272 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:31.257445 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:31.257455 1039759 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:31.361488 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263911.340365932
	
	I0729 14:38:31.361512 1039759 fix.go:216] guest clock: 1722263911.340365932
	I0729 14:38:31.361519 1039759 fix.go:229] Guest: 2024-07-29 14:38:31.340365932 +0000 UTC Remote: 2024-07-29 14:38:31.253213714 +0000 UTC m=+217.413183116 (delta=87.152218ms)
	I0729 14:38:31.361572 1039759 fix.go:200] guest clock delta is within tolerance: 87.152218ms
	I0729 14:38:31.361583 1039759 start.go:83] releasing machines lock for "old-k8s-version-360866", held for 19.232453759s
	I0729 14:38:31.361611 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.361921 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:31.364981 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365412 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.365441 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365648 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366227 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366482 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366583 1039759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:31.366644 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.366761 1039759 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:31.366797 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.369658 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.369699 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370051 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370081 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370105 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370125 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370309 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370325 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370567 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370568 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370773 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370809 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370958 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.370957 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.472108 1039759 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:31.478939 1039759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:31.630720 1039759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:31.637768 1039759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:31.637874 1039759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:31.655476 1039759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:31.655504 1039759 start.go:495] detecting cgroup driver to use...
	I0729 14:38:31.655584 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:31.679387 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:31.704260 1039759 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:31.704318 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:31.727875 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:31.743197 1039759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:31.867502 1039759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:32.035088 1039759 docker.go:233] disabling docker service ...
	I0729 14:38:32.035169 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:32.050118 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:32.064828 1039759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:32.202938 1039759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:32.333330 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:32.348845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:32.369848 1039759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:38:32.369923 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.381787 1039759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:32.381893 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.394331 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.405323 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.417259 1039759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:32.428997 1039759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:32.440934 1039759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:32.441003 1039759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:32.454949 1039759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:32.466042 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:32.596308 1039759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:32.762548 1039759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:32.762632 1039759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:32.768336 1039759 start.go:563] Will wait 60s for crictl version
	I0729 14:38:32.768447 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:32.772850 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:32.829827 1039759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:32.829936 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.863269 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.897768 1039759 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:38:32.899209 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:32.902257 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902649 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:32.902680 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902928 1039759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:32.908590 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:32.921952 1039759 kubeadm.go:883] updating cluster {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:32.922094 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:38:32.922141 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:32.969932 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:32.970003 1039759 ssh_runner.go:195] Run: which lz4
	I0729 14:38:32.974564 1039759 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:38:32.980128 1039759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:32.980173 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:38:32.795590 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.295541 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.750580 1038758 main.go:141] libmachine: (no-preload-603534) Waiting to get IP...
	I0729 14:38:31.751732 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.752236 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.752340 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.752236 1040763 retry.go:31] will retry after 239.008836ms: waiting for machine to come up
	I0729 14:38:31.993011 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.993538 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.993569 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.993481 1040763 retry.go:31] will retry after 288.863538ms: waiting for machine to come up
	I0729 14:38:32.284306 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.284941 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.284980 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.284867 1040763 retry.go:31] will retry after 410.903425ms: waiting for machine to come up
	I0729 14:38:32.697686 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.698291 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.698322 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.698227 1040763 retry.go:31] will retry after 423.090324ms: waiting for machine to come up
	I0729 14:38:33.122914 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.123550 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.123579 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.123500 1040763 retry.go:31] will retry after 744.030348ms: waiting for machine to come up
	I0729 14:38:33.869849 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.870499 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.870523 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.870456 1040763 retry.go:31] will retry after 888.516658ms: waiting for machine to come up
	I0729 14:38:34.760145 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:34.760594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:34.760627 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:34.760534 1040763 retry.go:31] will retry after 889.371631ms: waiting for machine to come up
	I0729 14:38:35.651169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:35.651700 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:35.651731 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:35.651636 1040763 retry.go:31] will retry after 1.200333492s: waiting for machine to come up
	I0729 14:38:33.181695 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.672201 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:34.707140 1039759 crio.go:462] duration metric: took 1.732619622s to copy over tarball
	I0729 14:38:34.707232 1039759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:37.740076 1039759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.032804006s)
	I0729 14:38:37.740105 1039759 crio.go:469] duration metric: took 3.032930405s to extract the tarball
	I0729 14:38:37.740113 1039759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:37.786934 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:37.827451 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:37.827484 1039759 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:37.827576 1039759 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:37.827606 1039759 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.827624 1039759 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.827702 1039759 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.827607 1039759 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.827683 1039759 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829621 1039759 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.829709 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.829724 1039759 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.829628 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829808 1039759 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:38:37.829625 1039759 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.113249 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.373433 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.378382 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.380909 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.382431 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.391678 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.392565 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.419739 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:38:38.491174 1039759 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:38:38.491255 1039759 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.491320 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570681 1039759 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:38:38.570784 1039759 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:38:38.570832 1039759 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.570889 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570792 1039759 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.570721 1039759 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:38:38.570966 1039759 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.570977 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570992 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.576687 1039759 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:38:38.576728 1039759 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.576769 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587650 1039759 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:38:38.587699 1039759 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.587701 1039759 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:38:38.587738 1039759 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:38:38.587753 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587791 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587866 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.587883 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.587913 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.587948 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.591209 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.599567 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.610869 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:38:38.742939 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:38:38.742974 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:38:38.743091 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:38:38.743098 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:38:38.745789 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:38:38.745857 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:38:38.753643 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:38:38.753704 1039759 cache_images.go:92] duration metric: took 926.203812ms to LoadCachedImages
	W0729 14:38:38.753790 1039759 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 14:38:38.753804 1039759 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.20.0 crio true true} ...
	I0729 14:38:38.753931 1039759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-360866 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:38.753992 1039759 ssh_runner.go:195] Run: crio config
	I0729 14:38:38.802220 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:38:38.802246 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:38.802258 1039759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:38.802285 1039759 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-360866 NodeName:old-k8s-version-360866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:38:38.802487 1039759 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-360866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:38.802591 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:38:38.816832 1039759 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:38.816934 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:38.827468 1039759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 14:38:38.847125 1039759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:38.865302 1039759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 14:38:37.795799 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:40.294979 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:36.853388 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:36.853944 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:36.853979 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:36.853881 1040763 retry.go:31] will retry after 1.750535475s: waiting for machine to come up
	I0729 14:38:38.605644 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:38.606135 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:38.606185 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:38.606079 1040763 retry.go:31] will retry after 2.245294623s: waiting for machine to come up
	I0729 14:38:40.853761 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:40.854277 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:40.854311 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:40.854214 1040763 retry.go:31] will retry after 1.864975071s: waiting for machine to come up
	I0729 14:38:38.299326 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:39.170692 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.170720 1039440 pod_ready.go:81] duration metric: took 8.008696752s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.170735 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177419 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.177449 1039440 pod_ready.go:81] duration metric: took 6.705958ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177463 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185538 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.185566 1039440 pod_ready.go:81] duration metric: took 2.008093791s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185580 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193833 1039440 pod_ready.go:92] pod "kube-proxy-p6dv5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.193864 1039440 pod_ready.go:81] duration metric: took 8.275486ms for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193878 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200931 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.200963 1039440 pod_ready.go:81] duration metric: took 7.075212ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200978 1039440 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:38.884267 1039759 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:38.889206 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:38.905643 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:39.032065 1039759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:39.051892 1039759 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866 for IP: 192.168.39.71
	I0729 14:38:39.051991 1039759 certs.go:194] generating shared ca certs ...
	I0729 14:38:39.052019 1039759 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.052203 1039759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:39.052258 1039759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:39.052270 1039759 certs.go:256] generating profile certs ...
	I0729 14:38:39.091359 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key
	I0729 14:38:39.091485 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0
	I0729 14:38:39.091554 1039759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key
	I0729 14:38:39.091718 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:39.091763 1039759 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:39.091776 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:39.091804 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:39.091835 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:39.091867 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:39.091924 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:39.092850 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:39.125528 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:39.153093 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:39.181324 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:39.235516 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 14:38:39.262599 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:38:39.293085 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:39.326318 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:38:39.361548 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:39.386876 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:39.412529 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:39.438418 1039759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:39.459519 1039759 ssh_runner.go:195] Run: openssl version
	I0729 14:38:39.466109 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:39.477941 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482748 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482820 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.489099 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:39.500207 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:39.511513 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516125 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516183 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.522297 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:39.533536 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:39.544996 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549681 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549733 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.556332 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:39.571393 1039759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:39.578420 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:39.586316 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:39.593450 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:39.600604 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:39.607483 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:39.614692 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:39.621776 1039759 kubeadm.go:392] StartCluster: {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:39.621893 1039759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:39.621955 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.673544 1039759 cri.go:89] found id: ""
	I0729 14:38:39.673634 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:39.687887 1039759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:39.687912 1039759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:39.687963 1039759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:39.701616 1039759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:39.702914 1039759 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:39.703576 1039759 kubeconfig.go:62] /home/jenkins/minikube-integration/19338-974764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-360866" cluster setting kubeconfig missing "old-k8s-version-360866" context setting]
	I0729 14:38:39.704951 1039759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.715056 1039759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:39.728384 1039759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.71
	I0729 14:38:39.728448 1039759 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:39.728466 1039759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:39.728534 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.778476 1039759 cri.go:89] found id: ""
	I0729 14:38:39.778561 1039759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:39.800712 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:39.813243 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:39.813265 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:39.813323 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:38:39.824822 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:39.824897 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:39.834966 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:38:39.847660 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:39.847887 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:39.861117 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.873868 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:39.873936 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.884195 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:38:39.895155 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:39.895234 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:39.909138 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:39.920721 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:40.055932 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.173909 1039759 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.117933178s)
	I0729 14:38:41.173947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.419684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.550852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.655941 1039759 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:41.656040 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.156080 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.656948 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.656087 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.794217 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.293634 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:42.720182 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:42.720674 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:42.720701 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:42.720614 1040763 retry.go:31] will retry after 2.929394717s: waiting for machine to come up
	I0729 14:38:45.653508 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:45.654044 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:45.654069 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:45.653993 1040763 retry.go:31] will retry after 4.133064498s: waiting for machine to come up
	I0729 14:38:43.208287 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.706607 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:44.156583 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:44.657199 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.156268 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.656786 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.156393 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.656151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.156507 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.656922 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.156840 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.656756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.294322 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.795189 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.789721 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790248 1038758 main.go:141] libmachine: (no-preload-603534) Found IP for machine: 192.168.61.116
	I0729 14:38:49.790272 1038758 main.go:141] libmachine: (no-preload-603534) Reserving static IP address...
	I0729 14:38:49.790290 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has current primary IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790823 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.790860 1038758 main.go:141] libmachine: (no-preload-603534) Reserved static IP address: 192.168.61.116
	I0729 14:38:49.790883 1038758 main.go:141] libmachine: (no-preload-603534) DBG | skip adding static IP to network mk-no-preload-603534 - found existing host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"}
	I0729 14:38:49.790920 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Getting to WaitForSSH function...
	I0729 14:38:49.790937 1038758 main.go:141] libmachine: (no-preload-603534) Waiting for SSH to be available...
	I0729 14:38:49.793243 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793646 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.793679 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793820 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH client type: external
	I0729 14:38:49.793850 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa (-rw-------)
	I0729 14:38:49.793884 1038758 main.go:141] libmachine: (no-preload-603534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:49.793899 1038758 main.go:141] libmachine: (no-preload-603534) DBG | About to run SSH command:
	I0729 14:38:49.793961 1038758 main.go:141] libmachine: (no-preload-603534) DBG | exit 0
	I0729 14:38:49.924827 1038758 main.go:141] libmachine: (no-preload-603534) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:49.925188 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetConfigRaw
	I0729 14:38:49.925835 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:49.928349 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.928799 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.928830 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.929091 1038758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/config.json ...
	I0729 14:38:49.929313 1038758 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:49.929334 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:49.929556 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:49.932040 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932431 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.932473 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932629 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:49.932798 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.932930 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.933033 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:49.933142 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:49.933313 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:49.933324 1038758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:50.049016 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:50.049059 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049328 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:38:50.049354 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049566 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.052138 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052532 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.052561 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052736 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.052918 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053093 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053269 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.053462 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.053641 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.053653 1038758 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-603534 && echo "no-preload-603534" | sudo tee /etc/hostname
	I0729 14:38:50.189302 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-603534
	
	I0729 14:38:50.189341 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.192559 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.192949 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.192974 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.193248 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.193476 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193689 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193870 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.194082 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.194305 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.194329 1038758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603534/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:50.322506 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:50.322540 1038758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:50.322564 1038758 buildroot.go:174] setting up certificates
	I0729 14:38:50.322577 1038758 provision.go:84] configureAuth start
	I0729 14:38:50.322589 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.322938 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:50.325594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.325957 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.325994 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.326139 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.328455 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328803 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.328828 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328950 1038758 provision.go:143] copyHostCerts
	I0729 14:38:50.329015 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:50.329025 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:50.329078 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:50.329165 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:50.329173 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:50.329192 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:50.329243 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:50.329249 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:50.329264 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:50.329310 1038758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.no-preload-603534 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-603534]
	I0729 14:38:50.447706 1038758 provision.go:177] copyRemoteCerts
	I0729 14:38:50.447777 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:50.447810 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.450714 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451106 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.451125 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451444 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.451679 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.451855 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.451975 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.539025 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:50.567887 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:50.594581 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 14:38:50.619475 1038758 provision.go:87] duration metric: took 296.880769ms to configureAuth
	I0729 14:38:50.619509 1038758 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:50.619708 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:38:50.619797 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.622753 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623121 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.623151 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623331 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.623519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623684 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623813 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.623971 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.624151 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.624168 1038758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:50.895618 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:50.895649 1038758 machine.go:97] duration metric: took 966.320375ms to provisionDockerMachine
	I0729 14:38:50.895662 1038758 start.go:293] postStartSetup for "no-preload-603534" (driver="kvm2")
	I0729 14:38:50.895684 1038758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:50.895717 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:50.896084 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:50.896112 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.899586 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.899998 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.900031 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.900168 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.900424 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.900622 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.900799 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.987195 1038758 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:50.991924 1038758 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:50.991952 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:50.992025 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:50.992111 1038758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:50.992208 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:51.002048 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:51.029714 1038758 start.go:296] duration metric: took 134.037621ms for postStartSetup
	I0729 14:38:51.029758 1038758 fix.go:56] duration metric: took 19.66799406s for fixHost
	I0729 14:38:51.029782 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.032495 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.032819 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.032844 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.033049 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.033236 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033377 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033587 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.033767 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:51.034007 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:51.034021 1038758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:38:51.149481 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263931.130931233
	
	I0729 14:38:51.149510 1038758 fix.go:216] guest clock: 1722263931.130931233
	I0729 14:38:51.149520 1038758 fix.go:229] Guest: 2024-07-29 14:38:51.130931233 +0000 UTC Remote: 2024-07-29 14:38:51.029761931 +0000 UTC m=+354.409484230 (delta=101.169302ms)
	I0729 14:38:51.149575 1038758 fix.go:200] guest clock delta is within tolerance: 101.169302ms
	I0729 14:38:51.149583 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 19.787859214s
	I0729 14:38:51.149617 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.149923 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:51.152671 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153054 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.153081 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153298 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.153898 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154092 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154192 1038758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:51.154245 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.154349 1038758 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:51.154378 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.157173 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157200 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157560 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157592 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157635 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157654 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157955 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.157976 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.158169 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158195 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158370 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158381 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158565 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.158572 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.260806 1038758 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:51.266847 1038758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:51.412637 1038758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:51.418879 1038758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:51.418954 1038758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:51.435946 1038758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:51.435978 1038758 start.go:495] detecting cgroup driver to use...
	I0729 14:38:51.436061 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:51.457517 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:51.472718 1038758 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:51.472811 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:51.487062 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:51.501410 1038758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:51.617292 1038758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:47.708063 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.708506 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.209337 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:51.764302 1038758 docker.go:233] disabling docker service ...
	I0729 14:38:51.764386 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:51.779137 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:51.794372 1038758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:51.930402 1038758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:52.062691 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:52.076796 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:52.095912 1038758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 14:38:52.095994 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.107507 1038758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:52.107588 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.119470 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.131252 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.141672 1038758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:52.152086 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.163682 1038758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.189614 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.200279 1038758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:52.211878 1038758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:52.211943 1038758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:52.224909 1038758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:52.234312 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:52.357370 1038758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:52.492520 1038758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:52.492622 1038758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:52.497537 1038758 start.go:563] Will wait 60s for crictl version
	I0729 14:38:52.497595 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.501292 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:52.544320 1038758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:52.544428 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.575452 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.605920 1038758 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 14:38:49.156539 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:49.656397 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.656968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.156321 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.656183 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.157099 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.656725 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.157009 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.656787 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.796331 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:53.799083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.607410 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:52.610017 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610296 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:52.610330 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610553 1038758 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:52.614659 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:52.626967 1038758 kubeadm.go:883] updating cluster {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:52.627087 1038758 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:38:52.627124 1038758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:52.662824 1038758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 14:38:52.662852 1038758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:52.662901 1038758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.662968 1038758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.663040 1038758 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 14:38:52.663043 1038758 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.663066 1038758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.663017 1038758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.664360 1038758 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 14:38:52.664947 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.664965 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.664954 1038758 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.665015 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.665023 1038758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.665351 1038758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.665423 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.829143 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.829158 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.829541 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.851797 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.866728 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 14:38:52.884604 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.893636 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.946087 1038758 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 14:38:52.946134 1038758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 14:38:52.946160 1038758 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.946170 1038758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.946173 1038758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 14:38:52.946192 1038758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.946216 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946221 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946217 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.954361 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.001715 1038758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 14:38:53.001766 1038758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.001826 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106651 1038758 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 14:38:53.106713 1038758 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.106770 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106838 1038758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 14:38:53.106883 1038758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.106921 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106927 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:53.106962 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:53.107012 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:53.107038 1038758 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 14:38:53.107067 1038758 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.107079 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.107092 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.131562 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.212076 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.212199 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.212272 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.214338 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.214430 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.216771 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.216941 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 14:38:53.217037 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:53.220214 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.220306 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.272021 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 14:38:53.272140 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:53.275939 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 14:38:53.275988 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276008 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.276009 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276029 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:38:53.276054 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.301528 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 14:38:53.301578 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 14:38:53.301600 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 14:38:53.301647 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 14:38:53.301759 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:38:55.357295 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.08120738s)
	I0729 14:38:55.357329 1038758 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.081270007s)
	I0729 14:38:55.357371 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 14:38:55.357338 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 14:38:55.357384 1038758 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.055605102s)
	I0729 14:38:55.357406 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 14:38:55.357407 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:55.357464 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:54.708330 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.207468 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:54.156921 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:54.656957 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.156201 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.656783 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.156180 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.656984 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.156610 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.656127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.156785 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.656192 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.295143 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:58.795511 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.217512 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.860011805s)
	I0729 14:38:57.217539 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 14:38:57.217570 1038758 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:57.217634 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:59.187398 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969733063s)
	I0729 14:38:59.187443 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 14:38:59.187482 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:59.187562 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:39:01.138568 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.950970137s)
	I0729 14:39:01.138617 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 14:39:01.138654 1038758 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:39:01.138740 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:59.207657 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:01.208795 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:59.156740 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:59.656223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.156726 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.656593 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.156115 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.656364 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.157069 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.656491 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.156938 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.656898 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.293858 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:03.484613 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.793953 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.231830 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.093043665s)
	I0729 14:39:04.231866 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 14:39:04.231897 1038758 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:04.231963 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:05.182458 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 14:39:05.182512 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:05.182566 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:03.209198 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.707557 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.157177 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:04.656505 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.156530 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.656389 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.156606 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.657121 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.157048 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.656497 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.156327 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.656868 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.794522 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.794886 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:07.253615 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.070972791s)
	I0729 14:39:07.253665 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 14:39:07.253700 1038758 cache_images.go:123] Successfully loaded all cached images
	I0729 14:39:07.253707 1038758 cache_images.go:92] duration metric: took 14.590842072s to LoadCachedImages
	I0729 14:39:07.253720 1038758 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0-beta.0 crio true true} ...
	I0729 14:39:07.253899 1038758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-603534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:39:07.253980 1038758 ssh_runner.go:195] Run: crio config
	I0729 14:39:07.309694 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:07.309720 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:07.309731 1038758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:39:07.309754 1038758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603534 NodeName:no-preload-603534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:39:07.309916 1038758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603534"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:39:07.309985 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 14:39:07.321876 1038758 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:39:07.321967 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:39:07.333057 1038758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 14:39:07.350193 1038758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 14:39:07.367171 1038758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 14:39:07.384123 1038758 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0729 14:39:07.387896 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:39:07.400317 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:39:07.525822 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:39:07.545142 1038758 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534 for IP: 192.168.61.116
	I0729 14:39:07.545167 1038758 certs.go:194] generating shared ca certs ...
	I0729 14:39:07.545189 1038758 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:39:07.545389 1038758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:39:07.545448 1038758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:39:07.545463 1038758 certs.go:256] generating profile certs ...
	I0729 14:39:07.545582 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/client.key
	I0729 14:39:07.545658 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key.117a155a
	I0729 14:39:07.545725 1038758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key
	I0729 14:39:07.545881 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:39:07.545913 1038758 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:39:07.545922 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:39:07.545945 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:39:07.545969 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:39:07.545990 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:39:07.546027 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:39:07.546679 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:39:07.582488 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:39:07.617327 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:39:07.647627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:39:07.685799 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:39:07.720365 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:39:07.744627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:39:07.771409 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:39:07.797570 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:39:07.820888 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:39:07.843714 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:39:07.867365 1038758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:39:07.884283 1038758 ssh_runner.go:195] Run: openssl version
	I0729 14:39:07.890379 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:39:07.901894 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906431 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906487 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.912284 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:39:07.923393 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:39:07.934119 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938563 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938620 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.944115 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:39:07.954815 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:39:07.965864 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970695 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970761 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.977340 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:39:07.990416 1038758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:39:07.995446 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:39:08.001615 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:39:08.007621 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:39:08.013648 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:39:08.019525 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:39:08.025505 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:39:08.031480 1038758 kubeadm.go:392] StartCluster: {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:39:08.031592 1038758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:39:08.031657 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.077847 1038758 cri.go:89] found id: ""
	I0729 14:39:08.077936 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:39:08.088616 1038758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:39:08.088639 1038758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:39:08.088704 1038758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:39:08.101150 1038758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:39:08.102305 1038758 kubeconfig.go:125] found "no-preload-603534" server: "https://192.168.61.116:8443"
	I0729 14:39:08.105529 1038758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:39:08.117031 1038758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0729 14:39:08.117070 1038758 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:39:08.117085 1038758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:39:08.117148 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.171626 1038758 cri.go:89] found id: ""
	I0729 14:39:08.171706 1038758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:39:08.190491 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:39:08.200776 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:39:08.200806 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:39:08.200873 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:39:08.211430 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:39:08.211483 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:39:08.221865 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:39:08.231668 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:39:08.231719 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:39:08.242027 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.251585 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:39:08.251639 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.261521 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:39:08.271210 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:39:08.271284 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:39:08.281112 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:39:08.290948 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:08.417397 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.400064 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.590859 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.670134 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.781580 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:39:09.781719 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.282592 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.781923 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.843114 1038758 api_server.go:72] duration metric: took 1.061535691s to wait for apiserver process to appear ...
	I0729 14:39:10.843151 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:39:10.843182 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:10.843715 1038758 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0729 14:39:11.343301 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:08.207563 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:10.208276 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.156858 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:09.656910 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.156126 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.657149 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.156223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.657184 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.156454 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.656896 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.656971 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.993249 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:13.993278 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:13.993298 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.011972 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:14.012012 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:14.343228 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.347946 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.347983 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:14.844144 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.858278 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.858311 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:15.343885 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:15.350223 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:39:15.360468 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:39:15.360513 1038758 api_server.go:131] duration metric: took 4.517353977s to wait for apiserver health ...
	I0729 14:39:15.360524 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:15.360532 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:15.362679 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:39:12.293516 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.294107 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.364237 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:39:15.379974 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:39:15.422444 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:39:15.441468 1038758 system_pods.go:59] 8 kube-system pods found
	I0729 14:39:15.441512 1038758 system_pods.go:61] "coredns-5cfdc65f69-tjdx4" [986cdef3-de61-4c0f-bc75-fae4f6b44a37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:39:15.441525 1038758 system_pods.go:61] "etcd-no-preload-603534" [e27f5761-5322-4d88-b90a-bcff42c9dfa5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:39:15.441537 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [33ed9f7c-1240-40cf-b51d-125b3473bfd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:39:15.441547 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [f79520a2-380e-4d8a-b1ff-78c6cd3d3741] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:39:15.441559 1038758 system_pods.go:61] "kube-proxy-ftpk5" [a5471ad7-5fd3-49b7-8631-4ca2962761d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:39:15.441568 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [860e262c-f036-4181-a0ad-8ba0058a47d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:39:15.441580 1038758 system_pods.go:61] "metrics-server-78fcd8795b-59sbc" [8af92987-ce8d-434f-93de-16d0adc35fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:39:15.441598 1038758 system_pods.go:61] "storage-provisioner" [579d0cc8-e30e-4ee3-ac55-c2f0bc5871e1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:39:15.441606 1038758 system_pods.go:74] duration metric: took 19.133029ms to wait for pod list to return data ...
	I0729 14:39:15.441618 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:39:15.445594 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:39:15.445630 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:39:15.445646 1038758 node_conditions.go:105] duration metric: took 4.019018ms to run NodePressure ...
	I0729 14:39:15.445678 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:15.743404 1038758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751028 1038758 kubeadm.go:739] kubelet initialised
	I0729 14:39:15.751050 1038758 kubeadm.go:740] duration metric: took 7.619795ms waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751059 1038758 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:39:15.759157 1038758 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:12.708704 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.208434 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:14.656806 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.156564 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.656881 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.156239 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.656440 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.157130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.656240 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.156161 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.656808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.294741 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:18.797700 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.768132 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.265670 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.709929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.206710 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.207809 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:19.156721 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:19.656766 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.156352 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.656788 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.156179 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.656213 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.156475 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.656275 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.156592 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.656979 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.294265 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:23.294366 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:25.794648 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.265947 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.266644 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.708214 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:27.208824 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.156798 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:24.656473 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.156551 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.656356 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.156887 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.656332 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.156494 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.656839 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.156763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.656512 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.795415 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:30.293460 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:26.766260 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.265817 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.265851 1038758 pod_ready.go:81] duration metric: took 13.506661461s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.265865 1038758 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276021 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.276043 1038758 pod_ready.go:81] duration metric: took 10.172055ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276052 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280197 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.280215 1038758 pod_ready.go:81] duration metric: took 4.156785ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280223 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284076 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.284096 1038758 pod_ready.go:81] duration metric: took 3.865927ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284122 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288280 1038758 pod_ready.go:92] pod "kube-proxy-ftpk5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.288297 1038758 pod_ready.go:81] duration metric: took 4.16843ms for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288305 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666771 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.666802 1038758 pod_ready.go:81] duration metric: took 378.49001ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666813 1038758 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.706596 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:32.208095 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.156096 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:29.656289 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.156756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.656888 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.156563 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.156271 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.656562 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.293988 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.793456 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:31.674203 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.174002 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.708005 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:37.206789 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.157046 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:34.656398 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.156198 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.656763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.156542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.656994 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.156808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.657093 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.156119 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.657017 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.793771 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.294267 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:36.676693 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.172713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.174348 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.207584 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.707645 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:39.656176 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.156455 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.656609 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.156891 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.656327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:41.656423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:41.701839 1039759 cri.go:89] found id: ""
	I0729 14:39:41.701863 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.701872 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:41.701878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:41.701942 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:41.738281 1039759 cri.go:89] found id: ""
	I0729 14:39:41.738308 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.738315 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:41.738321 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:41.738377 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:41.771954 1039759 cri.go:89] found id: ""
	I0729 14:39:41.771981 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.771990 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:41.771996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:41.772060 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:41.806157 1039759 cri.go:89] found id: ""
	I0729 14:39:41.806182 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.806190 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:41.806196 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:41.806249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:41.841284 1039759 cri.go:89] found id: ""
	I0729 14:39:41.841312 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.841319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:41.841325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:41.841394 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:41.875864 1039759 cri.go:89] found id: ""
	I0729 14:39:41.875893 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.875902 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:41.875908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:41.875962 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:41.909797 1039759 cri.go:89] found id: ""
	I0729 14:39:41.909824 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.909833 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:41.909840 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:41.909892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:41.943886 1039759 cri.go:89] found id: ""
	I0729 14:39:41.943912 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.943920 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:41.943929 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:41.943944 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:41.983224 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:41.983254 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:42.035264 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:42.035303 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:42.049343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:42.049369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:42.171904 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:42.171924 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:42.171947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:41.295209 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.795811 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.673853 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:45.674302 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.207555 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:46.707384 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.738629 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:44.753497 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:44.753582 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:44.793256 1039759 cri.go:89] found id: ""
	I0729 14:39:44.793283 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.793291 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:44.793298 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:44.793363 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:44.833698 1039759 cri.go:89] found id: ""
	I0729 14:39:44.833726 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.833733 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:44.833739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:44.833792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:44.876328 1039759 cri.go:89] found id: ""
	I0729 14:39:44.876357 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.876366 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:44.876372 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:44.876471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:44.918091 1039759 cri.go:89] found id: ""
	I0729 14:39:44.918121 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.918132 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:44.918140 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:44.918210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:44.965105 1039759 cri.go:89] found id: ""
	I0729 14:39:44.965137 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.965149 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:44.965157 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:44.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:45.014119 1039759 cri.go:89] found id: ""
	I0729 14:39:45.014150 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.014162 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:45.014170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:45.014243 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:45.059826 1039759 cri.go:89] found id: ""
	I0729 14:39:45.059858 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.059870 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:45.059879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:45.059946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:45.099666 1039759 cri.go:89] found id: ""
	I0729 14:39:45.099706 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.099717 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:45.099730 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:45.099748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:45.144219 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:45.144263 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:45.199719 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:45.199754 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:45.214225 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:45.214260 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:45.289090 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:45.289119 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:45.289138 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:47.860797 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:47.874523 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:47.874606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:47.913570 1039759 cri.go:89] found id: ""
	I0729 14:39:47.913599 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.913608 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:47.913615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:47.913674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:47.946699 1039759 cri.go:89] found id: ""
	I0729 14:39:47.946725 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.946734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:47.946740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:47.946792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:47.986492 1039759 cri.go:89] found id: ""
	I0729 14:39:47.986533 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.986546 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:47.986554 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:47.986635 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:48.027232 1039759 cri.go:89] found id: ""
	I0729 14:39:48.027260 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.027268 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:48.027274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:48.027327 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:48.065119 1039759 cri.go:89] found id: ""
	I0729 14:39:48.065145 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.065153 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:48.065159 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:48.065217 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:48.105087 1039759 cri.go:89] found id: ""
	I0729 14:39:48.105119 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.105128 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:48.105134 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:48.105199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:48.144684 1039759 cri.go:89] found id: ""
	I0729 14:39:48.144718 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.144730 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:48.144737 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:48.144816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:48.180350 1039759 cri.go:89] found id: ""
	I0729 14:39:48.180380 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.180389 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:48.180401 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:48.180436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:48.259859 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:48.259905 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:48.301132 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:48.301163 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:48.352753 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:48.352795 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:48.365936 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:48.365969 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:48.434634 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:46.293123 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.293674 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.294113 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:47.674411 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.173727 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.707887 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:51.207444 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.934903 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:50.948702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:50.948787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:50.982889 1039759 cri.go:89] found id: ""
	I0729 14:39:50.982917 1039759 logs.go:276] 0 containers: []
	W0729 14:39:50.982927 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:50.982933 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:50.983010 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:51.020679 1039759 cri.go:89] found id: ""
	I0729 14:39:51.020713 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.020726 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:51.020734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:51.020818 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:51.055114 1039759 cri.go:89] found id: ""
	I0729 14:39:51.055147 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.055158 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:51.055166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:51.055237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:51.089053 1039759 cri.go:89] found id: ""
	I0729 14:39:51.089087 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.089099 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:51.089108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:51.089184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:51.125823 1039759 cri.go:89] found id: ""
	I0729 14:39:51.125851 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.125861 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:51.125868 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:51.125938 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:51.162645 1039759 cri.go:89] found id: ""
	I0729 14:39:51.162683 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.162694 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:51.162702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:51.162767 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:51.196820 1039759 cri.go:89] found id: ""
	I0729 14:39:51.196849 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.196857 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:51.196864 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:51.196937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:51.236442 1039759 cri.go:89] found id: ""
	I0729 14:39:51.236469 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.236479 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:51.236491 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:51.236506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:51.317077 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:51.317101 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:51.317119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:51.398118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:51.398172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:51.437096 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:51.437128 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:51.488949 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:51.488992 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
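The cycle that ends here repeats on a roughly three-second cadence for the rest of this log: process 1039759 probes for a running kube-apiserver, lists each expected control-plane container by name with crictl, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output (the describe-nodes step fails because nothing is listening on localhost:8443). The sketch below is a minimal local approximation of that probe-and-gather sequence, built only from the commands shown in the Run: lines above; the real runner issues them over SSH to the guest (ssh_runner.go), not on the host.

    #!/usr/bin/env bash
    # Sketch of the probe-and-gather cycle logged above (assumes sudo and crictl are
    # available; minikube runs these over SSH inside the guest, not locally).
    set -u

    # 1. Is a kube-apiserver process up at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

    # 2. Probe each expected control-plane container by name, as cri.go does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

    # 3. Nothing is running, so collect the fallback diagnostics seen in the log.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # fails while localhost:8443 is down
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a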
	I0729 14:39:52.795544 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.294184 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:52.174241 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.672702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:53.207592 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.706971 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
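Interleaved with that loop, three other profiles still starting up (processes 1039263, 1038758 and 1039440) keep polling their metrics-server pods, which never report Ready in this window. A rough stand-alone equivalent of that wait is sketched below; the placeholder profile name and the k8s-app=metrics-server selector are assumptions for illustration, not taken from the log.

    # Hypothetical approximation of the pod_ready polling above; <profile> and the
    # k8s-app=metrics-server label are assumed, not shown in this log.
    kubectl --context <profile> -n kube-system \
        wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=5m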
	I0729 14:39:54.004536 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:54.019400 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:54.019480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:54.054592 1039759 cri.go:89] found id: ""
	I0729 14:39:54.054626 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.054639 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:54.054647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:54.054712 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:54.090184 1039759 cri.go:89] found id: ""
	I0729 14:39:54.090217 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.090227 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:54.090234 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:54.090304 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:54.129977 1039759 cri.go:89] found id: ""
	I0729 14:39:54.130007 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.130016 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:54.130022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:54.130081 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:54.170940 1039759 cri.go:89] found id: ""
	I0729 14:39:54.170970 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.170980 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:54.170988 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:54.171053 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:54.206197 1039759 cri.go:89] found id: ""
	I0729 14:39:54.206224 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.206244 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:54.206251 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:54.206340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:54.246929 1039759 cri.go:89] found id: ""
	I0729 14:39:54.246963 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.246973 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:54.246980 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:54.247049 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:54.286202 1039759 cri.go:89] found id: ""
	I0729 14:39:54.286231 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.286240 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:54.286245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:54.286315 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:54.321784 1039759 cri.go:89] found id: ""
	I0729 14:39:54.321815 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.321824 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:54.321837 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:54.321860 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:54.363159 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:54.363187 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:54.416151 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:54.416194 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:54.429824 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:54.429852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:54.506348 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:54.506373 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:54.506390 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.094810 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:57.108163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:57.108238 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:57.143556 1039759 cri.go:89] found id: ""
	I0729 14:39:57.143588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.143601 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:57.143608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:57.143678 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:57.177664 1039759 cri.go:89] found id: ""
	I0729 14:39:57.177695 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.177706 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:57.177714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:57.177801 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:57.212046 1039759 cri.go:89] found id: ""
	I0729 14:39:57.212106 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.212231 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:57.212249 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:57.212323 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:57.252518 1039759 cri.go:89] found id: ""
	I0729 14:39:57.252549 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.252558 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:57.252564 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:57.252677 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:57.287045 1039759 cri.go:89] found id: ""
	I0729 14:39:57.287069 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.287077 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:57.287084 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:57.287141 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:57.324553 1039759 cri.go:89] found id: ""
	I0729 14:39:57.324588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.324599 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:57.324607 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:57.324684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:57.358761 1039759 cri.go:89] found id: ""
	I0729 14:39:57.358801 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.358811 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:57.358819 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:57.358898 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:57.402023 1039759 cri.go:89] found id: ""
	I0729 14:39:57.402051 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.402062 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:57.402074 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:57.402094 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:57.445600 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:57.445632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:57.501876 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:57.501911 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:57.518264 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:57.518299 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:57.593247 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:57.593274 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:57.593292 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.793782 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.794287 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:56.673243 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.174416 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:57.707618 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.208574 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.181109 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:00.194553 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:00.194641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:00.237761 1039759 cri.go:89] found id: ""
	I0729 14:40:00.237801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.237814 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:00.237829 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:00.237901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:00.273113 1039759 cri.go:89] found id: ""
	I0729 14:40:00.273145 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.273157 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:00.273166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:00.273232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:00.312136 1039759 cri.go:89] found id: ""
	I0729 14:40:00.312169 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.312176 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:00.312182 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:00.312249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:00.349610 1039759 cri.go:89] found id: ""
	I0729 14:40:00.349642 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.349654 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:00.349662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:00.349792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:00.384121 1039759 cri.go:89] found id: ""
	I0729 14:40:00.384148 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.384157 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:00.384163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:00.384233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:00.419684 1039759 cri.go:89] found id: ""
	I0729 14:40:00.419720 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.419731 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:00.419739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:00.419809 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:00.453905 1039759 cri.go:89] found id: ""
	I0729 14:40:00.453937 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.453945 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:00.453951 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:00.454023 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:00.490124 1039759 cri.go:89] found id: ""
	I0729 14:40:00.490149 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.490158 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:00.490168 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:00.490185 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:00.562684 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:00.562713 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:00.562735 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:00.643860 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:00.643899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:00.683247 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:00.683276 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:00.734131 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:00.734166 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.249468 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:03.262712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:03.262788 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:03.300774 1039759 cri.go:89] found id: ""
	I0729 14:40:03.300801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.300816 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:03.300823 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:03.300891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:03.335367 1039759 cri.go:89] found id: ""
	I0729 14:40:03.335398 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.335409 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:03.335419 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:03.335488 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:03.375683 1039759 cri.go:89] found id: ""
	I0729 14:40:03.375717 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.375728 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:03.375734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:03.375814 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:03.409593 1039759 cri.go:89] found id: ""
	I0729 14:40:03.409623 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.409631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:03.409637 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:03.409711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:03.444531 1039759 cri.go:89] found id: ""
	I0729 14:40:03.444566 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.444578 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:03.444585 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:03.444655 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:03.479446 1039759 cri.go:89] found id: ""
	I0729 14:40:03.479476 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.479487 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:03.479495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:03.479563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:03.517277 1039759 cri.go:89] found id: ""
	I0729 14:40:03.517311 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.517321 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:03.517329 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:03.517396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:03.556343 1039759 cri.go:89] found id: ""
	I0729 14:40:03.556373 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.556381 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:03.556391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:03.556422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:03.610156 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:03.610196 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.624776 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:03.624812 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:03.696584 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:03.696609 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:03.696625 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:03.775066 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:03.775109 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:01.794683 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:03.795112 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:01.673980 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.173900 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:02.706731 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.707655 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:07.207027 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.319720 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:06.332865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:06.332937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:06.366576 1039759 cri.go:89] found id: ""
	I0729 14:40:06.366608 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.366631 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:06.366639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:06.366730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:06.402710 1039759 cri.go:89] found id: ""
	I0729 14:40:06.402735 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.402743 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:06.402748 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:06.402804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:06.439048 1039759 cri.go:89] found id: ""
	I0729 14:40:06.439095 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.439116 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:06.439125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:06.439196 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:06.473407 1039759 cri.go:89] found id: ""
	I0729 14:40:06.473443 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.473456 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:06.473464 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:06.473544 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:06.507278 1039759 cri.go:89] found id: ""
	I0729 14:40:06.507309 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.507319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:06.507327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:06.507396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:06.541573 1039759 cri.go:89] found id: ""
	I0729 14:40:06.541600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.541608 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:06.541617 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:06.541679 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:06.587666 1039759 cri.go:89] found id: ""
	I0729 14:40:06.587697 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.587707 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:06.587714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:06.587773 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:06.622415 1039759 cri.go:89] found id: ""
	I0729 14:40:06.622448 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.622459 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:06.622478 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:06.622497 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:06.659987 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:06.660019 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:06.716303 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:06.716338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:06.731051 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:06.731076 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:06.809014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:06.809045 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:06.809064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:06.293552 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:08.294453 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:10.295216 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.674445 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.174349 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.207784 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.208318 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.387843 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:09.401894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:09.401984 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:09.439385 1039759 cri.go:89] found id: ""
	I0729 14:40:09.439425 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.439438 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:09.439446 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:09.439506 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:09.474307 1039759 cri.go:89] found id: ""
	I0729 14:40:09.474340 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.474352 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:09.474361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:09.474434 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:09.508198 1039759 cri.go:89] found id: ""
	I0729 14:40:09.508233 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.508245 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:09.508253 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:09.508325 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:09.543729 1039759 cri.go:89] found id: ""
	I0729 14:40:09.543762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.543772 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:09.543779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:09.543847 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:09.598723 1039759 cri.go:89] found id: ""
	I0729 14:40:09.598760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.598769 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:09.598775 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:09.598841 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:09.636009 1039759 cri.go:89] found id: ""
	I0729 14:40:09.636038 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.636050 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:09.636058 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:09.636126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:09.675590 1039759 cri.go:89] found id: ""
	I0729 14:40:09.675618 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.675628 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:09.675636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:09.675698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:09.710331 1039759 cri.go:89] found id: ""
	I0729 14:40:09.710374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.710385 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:09.710397 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:09.710416 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:09.790014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:09.790046 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:09.790064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:09.870233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:09.870278 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:09.910421 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:09.910454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:09.962429 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:09.962474 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.476775 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:12.490804 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:12.490875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:12.529435 1039759 cri.go:89] found id: ""
	I0729 14:40:12.529466 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.529478 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:12.529485 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:12.529551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:12.564769 1039759 cri.go:89] found id: ""
	I0729 14:40:12.564806 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.564818 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:12.564826 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:12.564912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:12.600253 1039759 cri.go:89] found id: ""
	I0729 14:40:12.600285 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.600296 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:12.600304 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:12.600367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:12.636112 1039759 cri.go:89] found id: ""
	I0729 14:40:12.636146 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.636155 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:12.636161 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:12.636216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:12.675592 1039759 cri.go:89] found id: ""
	I0729 14:40:12.675621 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.675631 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:12.675639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:12.675711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:12.711438 1039759 cri.go:89] found id: ""
	I0729 14:40:12.711469 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.711480 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:12.711488 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:12.711554 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:12.745497 1039759 cri.go:89] found id: ""
	I0729 14:40:12.745524 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.745533 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:12.745539 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:12.745598 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:12.778934 1039759 cri.go:89] found id: ""
	I0729 14:40:12.778966 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.778977 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:12.778991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:12.779010 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:12.854721 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:12.854759 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:12.854780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:12.932118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:12.932158 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:12.974429 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:12.974461 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:13.030073 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:13.030108 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.795056 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.295125 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.674169 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:14.173503 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:16.175691 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:13.707268 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.708540 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.544245 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:15.559013 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:15.559090 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:15.594018 1039759 cri.go:89] found id: ""
	I0729 14:40:15.594051 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.594064 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:15.594076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:15.594147 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:15.630734 1039759 cri.go:89] found id: ""
	I0729 14:40:15.630762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.630771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:15.630777 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:15.630832 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:15.666159 1039759 cri.go:89] found id: ""
	I0729 14:40:15.666191 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.666202 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:15.666210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:15.666275 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:15.701058 1039759 cri.go:89] found id: ""
	I0729 14:40:15.701088 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.701098 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:15.701115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:15.701170 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:15.737006 1039759 cri.go:89] found id: ""
	I0729 14:40:15.737040 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.737052 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:15.737066 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:15.737139 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:15.775678 1039759 cri.go:89] found id: ""
	I0729 14:40:15.775706 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.775718 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:15.775728 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:15.775795 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:15.812239 1039759 cri.go:89] found id: ""
	I0729 14:40:15.812268 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.812276 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:15.812283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:15.812348 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:15.847653 1039759 cri.go:89] found id: ""
	I0729 14:40:15.847682 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.847693 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:15.847707 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:15.847725 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:15.903094 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:15.903137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:15.917060 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:15.917093 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:15.993458 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:15.993481 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:15.993499 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:16.073369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:16.073409 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:18.616107 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:18.630263 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:18.630340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:18.668228 1039759 cri.go:89] found id: ""
	I0729 14:40:18.668261 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.668271 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:18.668279 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:18.668342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:18.706863 1039759 cri.go:89] found id: ""
	I0729 14:40:18.706891 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.706902 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:18.706909 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:18.706978 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:18.739703 1039759 cri.go:89] found id: ""
	I0729 14:40:18.739728 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.739736 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:18.739742 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:18.739796 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:18.777025 1039759 cri.go:89] found id: ""
	I0729 14:40:18.777066 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.777077 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:18.777085 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:18.777158 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:18.814000 1039759 cri.go:89] found id: ""
	I0729 14:40:18.814026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.814039 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:18.814051 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:18.814116 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:18.851027 1039759 cri.go:89] found id: ""
	I0729 14:40:18.851058 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.851069 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:18.851076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:18.851151 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:17.796245 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.293964 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.673560 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:21.173099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.207376 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.707629 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.903888 1039759 cri.go:89] found id: ""
	I0729 14:40:18.903920 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.903932 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:18.903941 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:18.904002 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:18.938756 1039759 cri.go:89] found id: ""
	I0729 14:40:18.938784 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.938791 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:18.938801 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:18.938814 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:18.988482 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:18.988520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:19.002145 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:19.002177 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:19.085372 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:19.085397 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:19.085424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:19.171294 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:19.171387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:21.709578 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:21.722874 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:21.722941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:21.768110 1039759 cri.go:89] found id: ""
	I0729 14:40:21.768139 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.768150 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:21.768156 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:21.768210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:21.808853 1039759 cri.go:89] found id: ""
	I0729 14:40:21.808886 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.808897 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:21.808905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:21.808974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:21.843432 1039759 cri.go:89] found id: ""
	I0729 14:40:21.843472 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.843484 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:21.843493 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:21.843576 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:21.876497 1039759 cri.go:89] found id: ""
	I0729 14:40:21.876535 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.876547 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:21.876555 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:21.876633 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:21.911528 1039759 cri.go:89] found id: ""
	I0729 14:40:21.911556 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.911565 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:21.911571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:21.911626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:21.944514 1039759 cri.go:89] found id: ""
	I0729 14:40:21.944548 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.944560 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:21.944569 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:21.944641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:21.978113 1039759 cri.go:89] found id: ""
	I0729 14:40:21.978151 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.978162 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:21.978170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:21.978233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:22.012390 1039759 cri.go:89] found id: ""
	I0729 14:40:22.012438 1039759 logs.go:276] 0 containers: []
	W0729 14:40:22.012449 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:22.012461 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:22.012484 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:22.027921 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:22.027952 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:22.095087 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:22.095115 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:22.095132 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:22.178462 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:22.178495 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:22.220155 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:22.220188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:22.794431 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.295391 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:23.174050 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.673437 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:22.708012 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.207491 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:24.771932 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:24.784764 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:24.784851 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:24.820445 1039759 cri.go:89] found id: ""
	I0729 14:40:24.820473 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.820485 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:24.820501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:24.820569 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:24.854753 1039759 cri.go:89] found id: ""
	I0729 14:40:24.854786 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.854796 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:24.854802 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:24.854856 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:24.889200 1039759 cri.go:89] found id: ""
	I0729 14:40:24.889230 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.889242 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:24.889250 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:24.889312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:24.932383 1039759 cri.go:89] found id: ""
	I0729 14:40:24.932435 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.932447 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:24.932454 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:24.932515 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:24.971830 1039759 cri.go:89] found id: ""
	I0729 14:40:24.971859 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.971871 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:24.971879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:24.971936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:25.014336 1039759 cri.go:89] found id: ""
	I0729 14:40:25.014374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.014386 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:25.014397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:25.014464 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:25.048131 1039759 cri.go:89] found id: ""
	I0729 14:40:25.048161 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.048171 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:25.048177 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:25.048232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:25.089830 1039759 cri.go:89] found id: ""
	I0729 14:40:25.089866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.089878 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:25.089893 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:25.089907 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:25.172078 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:25.172113 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:25.221629 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:25.221661 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:25.291761 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:25.291806 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:25.314521 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:25.314559 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:25.402738 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:27.903335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:27.918335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:27.918413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:27.951929 1039759 cri.go:89] found id: ""
	I0729 14:40:27.951955 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.951966 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:27.951972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:27.952029 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:27.986229 1039759 cri.go:89] found id: ""
	I0729 14:40:27.986266 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.986279 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:27.986287 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:27.986352 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:28.019467 1039759 cri.go:89] found id: ""
	I0729 14:40:28.019504 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.019517 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:28.019524 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:28.019590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:28.053762 1039759 cri.go:89] found id: ""
	I0729 14:40:28.053790 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.053799 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:28.053806 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:28.053858 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:28.088947 1039759 cri.go:89] found id: ""
	I0729 14:40:28.088975 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.088989 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:28.088996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:28.089070 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:28.130018 1039759 cri.go:89] found id: ""
	I0729 14:40:28.130052 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.130064 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:28.130072 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:28.130143 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:28.163682 1039759 cri.go:89] found id: ""
	I0729 14:40:28.163715 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.163725 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:28.163734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:28.163802 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:28.199698 1039759 cri.go:89] found id: ""
	I0729 14:40:28.199732 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.199744 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:28.199757 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:28.199774 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:28.253735 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:28.253776 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:28.267786 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:28.267825 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:28.337218 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:28.337250 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:28.337265 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:28.419644 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:28.419688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:27.793963 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.293775 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:28.172846 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.173544 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:27.707884 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:29.708174 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.958707 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:30.972073 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:30.972146 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:31.016629 1039759 cri.go:89] found id: ""
	I0729 14:40:31.016662 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.016673 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:31.016681 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:31.016747 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:31.058891 1039759 cri.go:89] found id: ""
	I0729 14:40:31.058921 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.058930 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:31.058936 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:31.059004 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:31.096599 1039759 cri.go:89] found id: ""
	I0729 14:40:31.096633 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.096645 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:31.096654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:31.096741 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:31.143525 1039759 cri.go:89] found id: ""
	I0729 14:40:31.143554 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.143562 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:31.143568 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:31.143628 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:31.180188 1039759 cri.go:89] found id: ""
	I0729 14:40:31.180220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.180230 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:31.180239 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:31.180310 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:31.219995 1039759 cri.go:89] found id: ""
	I0729 14:40:31.220026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.220037 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:31.220045 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:31.220108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:31.254137 1039759 cri.go:89] found id: ""
	I0729 14:40:31.254182 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.254192 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:31.254200 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:31.254272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:31.288065 1039759 cri.go:89] found id: ""
	I0729 14:40:31.288098 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.288109 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:31.288122 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:31.288137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:31.341299 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:31.341338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:31.355357 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:31.355387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:31.427414 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:31.427439 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:31.427453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:31.508372 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:31.508439 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:32.294256 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.295131 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.174315 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.674462 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.208183 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:36.707763 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.052770 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:34.066300 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:34.066366 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:34.104242 1039759 cri.go:89] found id: ""
	I0729 14:40:34.104278 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.104290 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:34.104299 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:34.104367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:34.143092 1039759 cri.go:89] found id: ""
	I0729 14:40:34.143125 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.143137 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:34.143145 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:34.143216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:34.177966 1039759 cri.go:89] found id: ""
	I0729 14:40:34.177993 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.178002 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:34.178011 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:34.178098 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:34.218325 1039759 cri.go:89] found id: ""
	I0729 14:40:34.218351 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.218361 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:34.218369 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:34.218437 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:34.256632 1039759 cri.go:89] found id: ""
	I0729 14:40:34.256665 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.256675 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:34.256683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:34.256753 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:34.290713 1039759 cri.go:89] found id: ""
	I0729 14:40:34.290739 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.290747 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:34.290753 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:34.290816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:34.331345 1039759 cri.go:89] found id: ""
	I0729 14:40:34.331378 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.331389 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:34.331397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:34.331468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:34.370184 1039759 cri.go:89] found id: ""
	I0729 14:40:34.370214 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.370226 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:34.370239 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:34.370256 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:34.448667 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:34.448709 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:34.492943 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:34.492974 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:34.548784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:34.548827 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:34.565353 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:34.565389 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:34.639411 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.140039 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:37.153732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:37.153806 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:37.189360 1039759 cri.go:89] found id: ""
	I0729 14:40:37.189389 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.189398 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:37.189404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:37.189474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:37.225790 1039759 cri.go:89] found id: ""
	I0729 14:40:37.225820 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.225831 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:37.225839 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:37.225914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:37.261742 1039759 cri.go:89] found id: ""
	I0729 14:40:37.261772 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.261782 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:37.261791 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:37.261862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:37.295791 1039759 cri.go:89] found id: ""
	I0729 14:40:37.295826 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.295835 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:37.295843 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:37.295908 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:37.331290 1039759 cri.go:89] found id: ""
	I0729 14:40:37.331324 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.331334 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:37.331343 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:37.331413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:37.366150 1039759 cri.go:89] found id: ""
	I0729 14:40:37.366183 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.366195 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:37.366203 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:37.366273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:37.400983 1039759 cri.go:89] found id: ""
	I0729 14:40:37.401019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.401030 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:37.401038 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:37.401110 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:37.435333 1039759 cri.go:89] found id: ""
	I0729 14:40:37.435368 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.435379 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:37.435391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:37.435407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:37.488020 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:37.488057 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:37.501543 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:37.501573 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:37.576006 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.576033 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:37.576050 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:37.658600 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:37.658641 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:36.794615 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:38.795414 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:37.175174 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.674361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.207946 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:41.707724 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:40.200763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:40.216048 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:40.216121 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:40.253969 1039759 cri.go:89] found id: ""
	I0729 14:40:40.253996 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.254005 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:40.254012 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:40.254078 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:40.289557 1039759 cri.go:89] found id: ""
	I0729 14:40:40.289595 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.289608 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:40.289616 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:40.289698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:40.329756 1039759 cri.go:89] found id: ""
	I0729 14:40:40.329799 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.329823 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:40.329833 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:40.329906 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:40.365281 1039759 cri.go:89] found id: ""
	I0729 14:40:40.365315 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.365327 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:40.365335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:40.365403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:40.401300 1039759 cri.go:89] found id: ""
	I0729 14:40:40.401327 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.401336 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:40.401342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:40.401398 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:40.435679 1039759 cri.go:89] found id: ""
	I0729 14:40:40.435710 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.435719 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:40.435726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:40.435781 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:40.475825 1039759 cri.go:89] found id: ""
	I0729 14:40:40.475851 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.475859 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:40.475866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:40.475926 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:40.512153 1039759 cri.go:89] found id: ""
	I0729 14:40:40.512184 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.512191 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:40.512202 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:40.512215 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:40.563983 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:40.564022 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:40.578823 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:40.578853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:40.650282 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:40.650311 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:40.650328 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:40.734933 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:40.734980 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.280095 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:43.294284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:43.294361 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:43.328862 1039759 cri.go:89] found id: ""
	I0729 14:40:43.328890 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.328899 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:43.328905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:43.328971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:43.366321 1039759 cri.go:89] found id: ""
	I0729 14:40:43.366364 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.366376 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:43.366384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:43.366459 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:43.400189 1039759 cri.go:89] found id: ""
	I0729 14:40:43.400220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.400229 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:43.400235 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:43.400299 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:43.438521 1039759 cri.go:89] found id: ""
	I0729 14:40:43.438562 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.438582 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:43.438594 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:43.438665 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:43.473931 1039759 cri.go:89] found id: ""
	I0729 14:40:43.473958 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.473966 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:43.473972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:43.474035 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:43.511460 1039759 cri.go:89] found id: ""
	I0729 14:40:43.511490 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.511497 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:43.511506 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:43.511563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:43.547255 1039759 cri.go:89] found id: ""
	I0729 14:40:43.547290 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.547301 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:43.547309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:43.547375 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:43.582384 1039759 cri.go:89] found id: ""
	I0729 14:40:43.582418 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.582428 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:43.582441 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:43.582459 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:43.595747 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:43.595780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:43.665389 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:43.665413 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:43.665427 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:43.752669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:43.752712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.797239 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:43.797272 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:41.294242 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:43.294985 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:45.794449 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:42.173495 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.173830 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.207593 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.706855 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.352841 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:46.368204 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:46.368278 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:46.406661 1039759 cri.go:89] found id: ""
	I0729 14:40:46.406687 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.406695 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:46.406701 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:46.406761 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:46.443728 1039759 cri.go:89] found id: ""
	I0729 14:40:46.443760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.443771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:46.443778 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:46.443845 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:46.477632 1039759 cri.go:89] found id: ""
	I0729 14:40:46.477666 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.477677 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:46.477686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:46.477754 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:46.512510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.512538 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.512549 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:46.512557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:46.512629 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:46.550803 1039759 cri.go:89] found id: ""
	I0729 14:40:46.550834 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.550843 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:46.550848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:46.550914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:46.591610 1039759 cri.go:89] found id: ""
	I0729 14:40:46.591640 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.591652 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:46.591661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:46.591723 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:46.631090 1039759 cri.go:89] found id: ""
	I0729 14:40:46.631122 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.631132 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:46.631139 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:46.631199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:46.670510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.670542 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.670554 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:46.670573 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:46.670590 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:46.725560 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:46.725594 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:46.739348 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:46.739372 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:46.812850 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:46.812874 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:46.812892 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:46.892922 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:46.892964 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:47.795538 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:50.293685 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.674514 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.174577 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:48.708243 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.207168 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.438741 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:49.452505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:49.452588 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:49.487294 1039759 cri.go:89] found id: ""
	I0729 14:40:49.487323 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.487331 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:49.487340 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:49.487407 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:49.521783 1039759 cri.go:89] found id: ""
	I0729 14:40:49.521816 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.521828 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:49.521836 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:49.521901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:49.557039 1039759 cri.go:89] found id: ""
	I0729 14:40:49.557075 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.557086 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:49.557094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:49.557162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:49.590431 1039759 cri.go:89] found id: ""
	I0729 14:40:49.590462 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.590474 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:49.590494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:49.590574 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:49.626230 1039759 cri.go:89] found id: ""
	I0729 14:40:49.626260 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.626268 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:49.626274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:49.626339 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:49.662030 1039759 cri.go:89] found id: ""
	I0729 14:40:49.662060 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.662068 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:49.662075 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:49.662130 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:49.699988 1039759 cri.go:89] found id: ""
	I0729 14:40:49.700019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.700035 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:49.700076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:49.700144 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:49.736830 1039759 cri.go:89] found id: ""
	I0729 14:40:49.736864 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.736873 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:49.736882 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:49.736895 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:49.775670 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:49.775703 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:49.830820 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:49.830853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:49.846374 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:49.846407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:49.917475 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:49.917502 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:49.917520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.499291 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:52.513571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:52.513641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:52.547437 1039759 cri.go:89] found id: ""
	I0729 14:40:52.547474 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.547487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:52.547495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:52.547559 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:52.587664 1039759 cri.go:89] found id: ""
	I0729 14:40:52.587705 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.587718 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:52.587726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:52.587799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:52.630642 1039759 cri.go:89] found id: ""
	I0729 14:40:52.630670 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.630678 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:52.630684 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:52.630740 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:52.665978 1039759 cri.go:89] found id: ""
	I0729 14:40:52.666010 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.666022 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:52.666030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:52.666103 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:52.701111 1039759 cri.go:89] found id: ""
	I0729 14:40:52.701140 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.701148 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:52.701155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:52.701211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:52.744219 1039759 cri.go:89] found id: ""
	I0729 14:40:52.744247 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.744257 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:52.744265 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:52.744329 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:52.781081 1039759 cri.go:89] found id: ""
	I0729 14:40:52.781113 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.781122 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:52.781128 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:52.781198 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:52.817938 1039759 cri.go:89] found id: ""
	I0729 14:40:52.817974 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.817985 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:52.817999 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:52.818016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:52.895387 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:52.895416 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:52.895433 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.976313 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:52.976356 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:53.013814 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:53.013852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:53.065901 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:53.065937 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
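Note on the block above: this is the runner's degraded-cluster log-collection pass. pgrep finds no kube-apiserver process, every `crictl ps --name=<component>` query returns zero containers, and `kubectl describe nodes` fails because nothing is listening on localhost:8443, so only the kubelet journal, dmesg, the CRI-O journal and `crictl ps -a` output are collected. A minimal sketch of running the same checks by hand against the node follows; the profile name is a placeholder supplied by the reader, while the commands themselves are the ones logged above:

	# hypothetical manual re-run of the checks logged above; PROFILE is a placeholder, not taken from this report
	PROFILE=<profile-name>
	minikube -p "$PROFILE" ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"        # is an apiserver process running at all?
	minikube -p "$PROFILE" ssh -- "sudo crictl ps -a --quiet --name=kube-apiserver"     # any apiserver container, running or exited?
	minikube -p "$PROFILE" ssh -- "sudo journalctl -u kubelet -n 400"                   # kubelet journal, as gathered above
	minikube -p "$PROFILE" ssh -- "sudo journalctl -u crio -n 400"                      # CRI-O journal, as gathered above
	minikube -p "$PROFILE" ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"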
	I0729 14:40:52.798083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.293459 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.674103 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:54.174456 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:53.208082 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.707719 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
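The interleaved pod_ready lines come from three other test processes (pids 1039263, 1038758 and 1039440) polling their metrics-server pods, which stay at Ready=False for the whole window. An illustrative equivalent of that poll with plain kubectl, assuming the kubeconfig context is filled in by the reader (the pod name is one of those logged here):

	# illustrative only; <context> is a placeholder for the profile's kubeconfig context
	kubectl --context <context> -n kube-system get pod metrics-server-569cc877fc-5msnp -o wide
	kubectl --context <context> -n kube-system wait --for=condition=Ready pod/metrics-server-569cc877fc-5msnp --timeout=120s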
	I0729 14:40:55.580590 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:55.595023 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:55.595108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:55.631449 1039759 cri.go:89] found id: ""
	I0729 14:40:55.631479 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.631487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:55.631494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:55.631551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:55.666245 1039759 cri.go:89] found id: ""
	I0729 14:40:55.666274 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.666283 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:55.666296 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:55.666364 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:55.706582 1039759 cri.go:89] found id: ""
	I0729 14:40:55.706611 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.706621 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:55.706629 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:55.706696 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:55.741930 1039759 cri.go:89] found id: ""
	I0729 14:40:55.741962 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.741973 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:55.741990 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:55.742058 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:55.781440 1039759 cri.go:89] found id: ""
	I0729 14:40:55.781475 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.781486 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:55.781494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:55.781599 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:55.825329 1039759 cri.go:89] found id: ""
	I0729 14:40:55.825366 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.825377 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:55.825387 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:55.825466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:55.860834 1039759 cri.go:89] found id: ""
	I0729 14:40:55.860866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.860878 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:55.860886 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:55.860950 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:55.895460 1039759 cri.go:89] found id: ""
	I0729 14:40:55.895492 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.895502 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:55.895514 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:55.895531 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:55.951739 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:55.951781 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:55.965760 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:55.965792 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:56.044422 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:56.044458 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:56.044477 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:56.123669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:56.123714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
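Between collection passes the runner simply re-probes for an apiserver process and repeats the container listing, which is why the same block recurs every few seconds for the rest of this section. A rough sketch of the wait loop implied by those timestamps, assuming a 3-second interval (the report shows only the timestamps, not the actual retry logic):

	# rough sketch of the retry implied by the timestamps above; the interval is an assumption
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		sleep 3
	done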
	I0729 14:40:58.668279 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:58.682912 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:58.682974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:58.718757 1039759 cri.go:89] found id: ""
	I0729 14:40:58.718787 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.718798 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:58.718807 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:58.718861 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:58.756986 1039759 cri.go:89] found id: ""
	I0729 14:40:58.757015 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.757025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:58.757031 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:58.757092 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:58.797572 1039759 cri.go:89] found id: ""
	I0729 14:40:58.797600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.797611 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:58.797620 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:58.797689 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:58.839410 1039759 cri.go:89] found id: ""
	I0729 14:40:58.839442 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.839453 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:58.839461 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:58.839523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:57.293935 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:59.294805 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:56.673078 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.674177 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:01.173709 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:57.708051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:00.207822 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:02.208128 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.874477 1039759 cri.go:89] found id: ""
	I0729 14:40:58.874508 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.874519 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:58.874528 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:58.874602 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:58.910248 1039759 cri.go:89] found id: ""
	I0729 14:40:58.910281 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.910296 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:58.910307 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:58.910368 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:58.944845 1039759 cri.go:89] found id: ""
	I0729 14:40:58.944879 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.944890 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:58.944896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:58.944955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:58.978818 1039759 cri.go:89] found id: ""
	I0729 14:40:58.978854 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.978867 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:58.978879 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:58.978898 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:59.018961 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:59.018993 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:59.069883 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:59.069920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:59.083277 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:59.083304 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:59.159470 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:59.159494 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:59.159511 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:01.746915 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:01.759883 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:01.759949 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:01.796563 1039759 cri.go:89] found id: ""
	I0729 14:41:01.796589 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.796602 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:01.796608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:01.796691 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:01.831464 1039759 cri.go:89] found id: ""
	I0729 14:41:01.831499 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.831511 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:01.831520 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:01.831586 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:01.868633 1039759 cri.go:89] found id: ""
	I0729 14:41:01.868660 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.868668 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:01.868674 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:01.868732 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:01.903154 1039759 cri.go:89] found id: ""
	I0729 14:41:01.903183 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.903194 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:01.903202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:01.903272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:01.938256 1039759 cri.go:89] found id: ""
	I0729 14:41:01.938292 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.938304 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:01.938312 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:01.938384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:01.978117 1039759 cri.go:89] found id: ""
	I0729 14:41:01.978147 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.978159 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:01.978168 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:01.978242 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:02.014061 1039759 cri.go:89] found id: ""
	I0729 14:41:02.014089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.014100 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:02.014108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:02.014176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:02.050133 1039759 cri.go:89] found id: ""
	I0729 14:41:02.050165 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.050177 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:02.050189 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:02.050206 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:02.101188 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:02.101253 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:02.114343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:02.114369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:02.190309 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:02.190338 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:02.190354 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:02.266895 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:02.266939 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:01.794976 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.295199 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:03.176713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:05.673543 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.708032 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:07.207702 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.809474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:04.824652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:04.824725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:04.858442 1039759 cri.go:89] found id: ""
	I0729 14:41:04.858474 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.858483 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:04.858490 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:04.858542 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:04.893199 1039759 cri.go:89] found id: ""
	I0729 14:41:04.893229 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.893237 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:04.893243 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:04.893297 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:04.929480 1039759 cri.go:89] found id: ""
	I0729 14:41:04.929512 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.929524 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:04.929532 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:04.929601 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:04.965097 1039759 cri.go:89] found id: ""
	I0729 14:41:04.965127 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.965139 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:04.965147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:04.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:05.003419 1039759 cri.go:89] found id: ""
	I0729 14:41:05.003449 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.003460 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:05.003467 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:05.003557 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:05.037408 1039759 cri.go:89] found id: ""
	I0729 14:41:05.037439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.037451 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:05.037458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:05.037527 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:05.072909 1039759 cri.go:89] found id: ""
	I0729 14:41:05.072942 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.072953 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:05.072961 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:05.073034 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:05.123731 1039759 cri.go:89] found id: ""
	I0729 14:41:05.123764 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.123776 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:05.123787 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:05.123802 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:05.188687 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:05.188732 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:05.204119 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:05.204160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:05.294702 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:05.294732 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:05.294750 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:05.377412 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:05.377456 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:07.923437 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:07.937633 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:07.937711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:07.976813 1039759 cri.go:89] found id: ""
	I0729 14:41:07.976850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:07.976861 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:07.976872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:07.976946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:08.013051 1039759 cri.go:89] found id: ""
	I0729 14:41:08.013089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.013100 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:08.013109 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:08.013177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:08.047372 1039759 cri.go:89] found id: ""
	I0729 14:41:08.047404 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.047413 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:08.047420 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:08.047477 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:08.080555 1039759 cri.go:89] found id: ""
	I0729 14:41:08.080594 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.080607 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:08.080615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:08.080684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:08.117054 1039759 cri.go:89] found id: ""
	I0729 14:41:08.117087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.117098 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:08.117106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:08.117175 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:08.152270 1039759 cri.go:89] found id: ""
	I0729 14:41:08.152295 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.152303 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:08.152309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:08.152373 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:08.188804 1039759 cri.go:89] found id: ""
	I0729 14:41:08.188830 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.188842 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:08.188848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:08.188903 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:08.225101 1039759 cri.go:89] found id: ""
	I0729 14:41:08.225139 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.225151 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:08.225164 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:08.225182 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:08.278721 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:08.278759 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:08.293417 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:08.293453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:08.371802 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:08.371825 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:08.371843 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:08.452233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:08.452274 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:06.795598 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.294006 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:08.175147 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.673937 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.707777 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:12.208180 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.993379 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:11.007599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:11.007668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:11.045603 1039759 cri.go:89] found id: ""
	I0729 14:41:11.045652 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.045675 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:11.045683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:11.045746 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:11.079682 1039759 cri.go:89] found id: ""
	I0729 14:41:11.079711 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.079722 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:11.079730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:11.079797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:11.122138 1039759 cri.go:89] found id: ""
	I0729 14:41:11.122167 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.122180 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:11.122185 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:11.122249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:11.157416 1039759 cri.go:89] found id: ""
	I0729 14:41:11.157444 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.157452 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:11.157458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:11.157514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:11.198589 1039759 cri.go:89] found id: ""
	I0729 14:41:11.198631 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.198643 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:11.198652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:11.198725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:11.238329 1039759 cri.go:89] found id: ""
	I0729 14:41:11.238360 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.238369 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:11.238376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:11.238442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:11.273283 1039759 cri.go:89] found id: ""
	I0729 14:41:11.273313 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.273322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:11.273328 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:11.273382 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:11.313927 1039759 cri.go:89] found id: ""
	I0729 14:41:11.313972 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.313984 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:11.313997 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:11.314014 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:11.366507 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:11.366546 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:11.380529 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:11.380566 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:11.451839 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:11.451862 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:11.451882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:11.537109 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:11.537150 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:11.294967 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.793738 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.173482 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:15.673025 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.706708 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:16.707135 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.104794 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:14.117474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:14.117541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:14.154117 1039759 cri.go:89] found id: ""
	I0729 14:41:14.154151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.154163 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:14.154171 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:14.154236 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:14.195762 1039759 cri.go:89] found id: ""
	I0729 14:41:14.195793 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.195804 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:14.195812 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:14.195875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:14.231434 1039759 cri.go:89] found id: ""
	I0729 14:41:14.231460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.231467 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:14.231474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:14.231523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:14.264802 1039759 cri.go:89] found id: ""
	I0729 14:41:14.264839 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.264851 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:14.264859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:14.264932 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:14.300162 1039759 cri.go:89] found id: ""
	I0729 14:41:14.300184 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.300194 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:14.300202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:14.300262 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:14.335351 1039759 cri.go:89] found id: ""
	I0729 14:41:14.335385 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.335396 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:14.335404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:14.335468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:14.370064 1039759 cri.go:89] found id: ""
	I0729 14:41:14.370096 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.370107 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:14.370115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:14.370184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:14.406506 1039759 cri.go:89] found id: ""
	I0729 14:41:14.406538 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.406549 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:14.406562 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:14.406579 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:14.445641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:14.445681 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:14.496132 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:14.496165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:14.509732 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:14.509767 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:14.581519 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:14.581541 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:14.581558 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.164487 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:17.178359 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:17.178447 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:17.213780 1039759 cri.go:89] found id: ""
	I0729 14:41:17.213869 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.213887 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:17.213896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:17.213966 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:17.251006 1039759 cri.go:89] found id: ""
	I0729 14:41:17.251045 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.251056 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:17.251063 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:17.251135 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:17.306624 1039759 cri.go:89] found id: ""
	I0729 14:41:17.306654 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.306683 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:17.306691 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:17.306775 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:17.358882 1039759 cri.go:89] found id: ""
	I0729 14:41:17.358915 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.358927 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:17.358935 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:17.359008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:17.408592 1039759 cri.go:89] found id: ""
	I0729 14:41:17.408620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.408636 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:17.408642 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:17.408705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:17.445201 1039759 cri.go:89] found id: ""
	I0729 14:41:17.445228 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.445236 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:17.445242 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:17.445305 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:17.477441 1039759 cri.go:89] found id: ""
	I0729 14:41:17.477483 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.477511 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:17.477518 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:17.477591 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:17.509148 1039759 cri.go:89] found id: ""
	I0729 14:41:17.509179 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.509190 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:17.509203 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:17.509220 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:17.559784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:17.559823 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:17.574163 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:17.574199 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:17.644249 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:17.644277 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:17.644294 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.720652 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:17.720688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:16.293977 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.793489 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.793760 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:17.674099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.173742 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.707238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:21.209948 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.261591 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:20.274649 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:20.274731 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:20.311561 1039759 cri.go:89] found id: ""
	I0729 14:41:20.311591 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.311600 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:20.311606 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:20.311668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:20.350267 1039759 cri.go:89] found id: ""
	I0729 14:41:20.350300 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.350313 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:20.350322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:20.350379 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:20.384183 1039759 cri.go:89] found id: ""
	I0729 14:41:20.384213 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.384220 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:20.384227 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:20.384288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:20.422330 1039759 cri.go:89] found id: ""
	I0729 14:41:20.422358 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.422367 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:20.422373 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:20.422442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:20.465537 1039759 cri.go:89] found id: ""
	I0729 14:41:20.465568 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.465577 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:20.465586 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:20.465663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:20.507661 1039759 cri.go:89] found id: ""
	I0729 14:41:20.507691 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.507701 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:20.507710 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:20.507774 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:20.545830 1039759 cri.go:89] found id: ""
	I0729 14:41:20.545857 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.545866 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:20.545872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:20.545936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:20.586311 1039759 cri.go:89] found id: ""
	I0729 14:41:20.586345 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.586354 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:20.586364 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:20.586379 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:20.635183 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:20.635224 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:20.649660 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:20.649701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:20.729588 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:20.729613 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:20.729632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:20.811565 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:20.811605 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:23.354318 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:23.367784 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:23.367862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:23.401929 1039759 cri.go:89] found id: ""
	I0729 14:41:23.401956 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.401965 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:23.401970 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:23.402033 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:23.437130 1039759 cri.go:89] found id: ""
	I0729 14:41:23.437161 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.437185 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:23.437205 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:23.437267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:23.474029 1039759 cri.go:89] found id: ""
	I0729 14:41:23.474066 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.474078 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:23.474087 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:23.474159 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:23.506678 1039759 cri.go:89] found id: ""
	I0729 14:41:23.506714 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.506725 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:23.506732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:23.506791 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:23.541578 1039759 cri.go:89] found id: ""
	I0729 14:41:23.541618 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.541628 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:23.541636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:23.541709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:23.575852 1039759 cri.go:89] found id: ""
	I0729 14:41:23.575883 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.575891 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:23.575898 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:23.575955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:23.610611 1039759 cri.go:89] found id: ""
	I0729 14:41:23.610638 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.610646 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:23.610653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:23.610717 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:23.650403 1039759 cri.go:89] found id: ""
	I0729 14:41:23.650429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.650438 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:23.650448 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:23.650460 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:23.701856 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:23.701899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:23.716925 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:23.716958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:23.790678 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:23.790699 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:23.790717 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:23.873204 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:23.873242 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:22.794021 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:25.294289 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:22.173787 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:24.673139 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:23.708892 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.207121 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.414319 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:26.428069 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:26.428152 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:26.462538 1039759 cri.go:89] found id: ""
	I0729 14:41:26.462578 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.462590 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:26.462599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:26.462687 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:26.496461 1039759 cri.go:89] found id: ""
	I0729 14:41:26.496501 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.496513 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:26.496521 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:26.496593 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:26.534152 1039759 cri.go:89] found id: ""
	I0729 14:41:26.534190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.534203 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:26.534210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:26.534273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:26.572986 1039759 cri.go:89] found id: ""
	I0729 14:41:26.573016 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.573024 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:26.573030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:26.573097 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:26.607330 1039759 cri.go:89] found id: ""
	I0729 14:41:26.607359 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.607370 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:26.607378 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:26.607445 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:26.643023 1039759 cri.go:89] found id: ""
	I0729 14:41:26.643056 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.643067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:26.643078 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:26.643145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:26.679820 1039759 cri.go:89] found id: ""
	I0729 14:41:26.679846 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.679856 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:26.679865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:26.679930 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:26.716433 1039759 cri.go:89] found id: ""
	I0729 14:41:26.716462 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.716470 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:26.716480 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:26.716494 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:26.794508 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:26.794529 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:26.794542 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:26.876663 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:26.876701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:26.917309 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:26.917343 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:26.969397 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:26.969436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:27.294711 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.793946 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.679220 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.173259 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:31.175213 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:28.207613 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:30.707297 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.483935 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:29.497502 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:29.497585 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:29.532671 1039759 cri.go:89] found id: ""
	I0729 14:41:29.532698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.532712 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:29.532719 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:29.532784 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:29.568058 1039759 cri.go:89] found id: ""
	I0729 14:41:29.568085 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.568096 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:29.568103 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:29.568176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:29.601173 1039759 cri.go:89] found id: ""
	I0729 14:41:29.601206 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.601216 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:29.601225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:29.601284 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:29.634333 1039759 cri.go:89] found id: ""
	I0729 14:41:29.634372 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.634384 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:29.634393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:29.634460 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:29.669669 1039759 cri.go:89] found id: ""
	I0729 14:41:29.669698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.669706 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:29.669712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:29.669777 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:29.702847 1039759 cri.go:89] found id: ""
	I0729 14:41:29.702876 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.702886 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:29.702894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:29.702960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:29.740713 1039759 cri.go:89] found id: ""
	I0729 14:41:29.740743 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.740754 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:29.740762 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:29.740846 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:29.777795 1039759 cri.go:89] found id: ""
	I0729 14:41:29.777829 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.777841 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:29.777853 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:29.777869 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:29.858713 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:29.858758 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:29.896873 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:29.896914 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:29.946905 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:29.946945 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:29.960136 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:29.960170 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:30.035951 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.536130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:32.549431 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:32.549501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:32.586069 1039759 cri.go:89] found id: ""
	I0729 14:41:32.586098 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.586117 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:32.586125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:32.586183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:32.623094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.623123 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.623132 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:32.623138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:32.623205 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:32.658370 1039759 cri.go:89] found id: ""
	I0729 14:41:32.658406 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.658418 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:32.658426 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:32.658492 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:32.696436 1039759 cri.go:89] found id: ""
	I0729 14:41:32.696469 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.696478 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:32.696484 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:32.696551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:32.731306 1039759 cri.go:89] found id: ""
	I0729 14:41:32.731340 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.731352 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:32.731361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:32.731431 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:32.767049 1039759 cri.go:89] found id: ""
	I0729 14:41:32.767087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.767098 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:32.767106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:32.767179 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:32.805094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.805126 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.805138 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:32.805147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:32.805223 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:32.840088 1039759 cri.go:89] found id: ""
	I0729 14:41:32.840116 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.840125 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:32.840137 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:32.840155 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:32.854065 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:32.854095 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:32.921447 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.921477 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:32.921493 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:33.005086 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:33.005129 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:33.042555 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:33.042617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:31.795000 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:34.293349 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:33.673734 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.674275 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:32.707849 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.210238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.593173 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:35.605965 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:35.606031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:35.639315 1039759 cri.go:89] found id: ""
	I0729 14:41:35.639355 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.639367 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:35.639374 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:35.639466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:35.678657 1039759 cri.go:89] found id: ""
	I0729 14:41:35.678686 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.678695 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:35.678700 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:35.678764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:35.714108 1039759 cri.go:89] found id: ""
	I0729 14:41:35.714136 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.714147 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:35.714155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:35.714220 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:35.748793 1039759 cri.go:89] found id: ""
	I0729 14:41:35.748820 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.748831 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:35.748837 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:35.748891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:35.788853 1039759 cri.go:89] found id: ""
	I0729 14:41:35.788884 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.788895 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:35.788903 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:35.788971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:35.825032 1039759 cri.go:89] found id: ""
	I0729 14:41:35.825059 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.825067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:35.825074 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:35.825126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:35.859990 1039759 cri.go:89] found id: ""
	I0729 14:41:35.860022 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.860033 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:35.860041 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:35.860131 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:35.894318 1039759 cri.go:89] found id: ""
	I0729 14:41:35.894352 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.894364 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:35.894377 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:35.894393 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:35.907591 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:35.907617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:35.975000 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:35.975023 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:35.975040 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:36.056188 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:36.056226 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:36.094569 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:36.094606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.648685 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:38.661546 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:38.661612 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:38.698658 1039759 cri.go:89] found id: ""
	I0729 14:41:38.698692 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.698704 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:38.698711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:38.698797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:38.731239 1039759 cri.go:89] found id: ""
	I0729 14:41:38.731274 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.731282 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:38.731288 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:38.731341 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:38.766549 1039759 cri.go:89] found id: ""
	I0729 14:41:38.766583 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.766594 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:38.766602 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:38.766663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:38.803347 1039759 cri.go:89] found id: ""
	I0729 14:41:38.803374 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.803385 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:38.803393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:38.803467 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:38.840327 1039759 cri.go:89] found id: ""
	I0729 14:41:38.840363 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.840374 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:38.840384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:38.840480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:38.874181 1039759 cri.go:89] found id: ""
	I0729 14:41:38.874211 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.874219 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:38.874225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:38.874293 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:36.297301 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.794975 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.173718 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:40.675880 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:37.707171 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:39.709125 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:42.206569 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.908642 1039759 cri.go:89] found id: ""
	I0729 14:41:38.908674 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.908686 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:38.908694 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:38.908762 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:38.945081 1039759 cri.go:89] found id: ""
	I0729 14:41:38.945107 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.945116 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:38.945126 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:38.945140 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.999792 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:38.999826 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:39.013396 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:39.013421 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:39.077975 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:39.077998 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:39.078016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:39.169606 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:39.169654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.716258 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:41.730508 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:41.730579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:41.766457 1039759 cri.go:89] found id: ""
	I0729 14:41:41.766490 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.766498 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:41.766505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:41.766571 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:41.801073 1039759 cri.go:89] found id: ""
	I0729 14:41:41.801099 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.801109 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:41.801117 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:41.801178 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:41.836962 1039759 cri.go:89] found id: ""
	I0729 14:41:41.836986 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.836997 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:41.837005 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:41.837072 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:41.870169 1039759 cri.go:89] found id: ""
	I0729 14:41:41.870195 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.870205 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:41.870213 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:41.870274 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:41.902298 1039759 cri.go:89] found id: ""
	I0729 14:41:41.902323 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.902331 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:41.902337 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:41.902387 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:41.935394 1039759 cri.go:89] found id: ""
	I0729 14:41:41.935429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.935441 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:41.935449 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:41.935513 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:41.972397 1039759 cri.go:89] found id: ""
	I0729 14:41:41.972437 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.972448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:41.972456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:41.972525 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:42.006477 1039759 cri.go:89] found id: ""
	I0729 14:41:42.006503 1039759 logs.go:276] 0 containers: []
	W0729 14:41:42.006513 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:42.006526 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:42.006540 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:42.053853 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:42.053886 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:42.067143 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:42.067172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:42.135406 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:42.135432 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:42.135449 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:42.212571 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:42.212603 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.293241 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.294160 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.793697 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.173087 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.174327 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.206854 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:46.707167 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.751283 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:44.764600 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:44.764688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:44.800821 1039759 cri.go:89] found id: ""
	I0729 14:41:44.800850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.800857 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:44.800863 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:44.800924 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:44.834638 1039759 cri.go:89] found id: ""
	I0729 14:41:44.834670 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.834680 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:44.834686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:44.834744 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:44.870198 1039759 cri.go:89] found id: ""
	I0729 14:41:44.870225 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.870237 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:44.870245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:44.870312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:44.904588 1039759 cri.go:89] found id: ""
	I0729 14:41:44.904620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.904631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:44.904639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:44.904713 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:44.939442 1039759 cri.go:89] found id: ""
	I0729 14:41:44.939467 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.939474 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:44.939480 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:44.939541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:44.972771 1039759 cri.go:89] found id: ""
	I0729 14:41:44.972799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.972808 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:44.972815 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:44.972888 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:45.007513 1039759 cri.go:89] found id: ""
	I0729 14:41:45.007540 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.007549 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:45.007557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:45.007626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:45.038752 1039759 cri.go:89] found id: ""
	I0729 14:41:45.038778 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.038787 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:45.038797 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:45.038821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:45.089807 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:45.089838 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:45.103188 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:45.103221 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:45.174509 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:45.174532 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:45.174554 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:45.255288 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:45.255327 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:47.799207 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:47.814781 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:47.814866 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:47.855111 1039759 cri.go:89] found id: ""
	I0729 14:41:47.855143 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.855156 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:47.855164 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:47.855230 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:47.892542 1039759 cri.go:89] found id: ""
	I0729 14:41:47.892577 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.892589 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:47.892603 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:47.892674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:47.933408 1039759 cri.go:89] found id: ""
	I0729 14:41:47.933439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.933451 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:47.933458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:47.933531 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:47.970397 1039759 cri.go:89] found id: ""
	I0729 14:41:47.970427 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.970439 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:47.970447 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:47.970514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:48.006852 1039759 cri.go:89] found id: ""
	I0729 14:41:48.006880 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.006891 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:48.006899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:48.006967 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:48.046766 1039759 cri.go:89] found id: ""
	I0729 14:41:48.046799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.046811 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:48.046820 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:48.046893 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:48.084354 1039759 cri.go:89] found id: ""
	I0729 14:41:48.084380 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.084387 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:48.084393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:48.084468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:48.121526 1039759 cri.go:89] found id: ""
	I0729 14:41:48.121559 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.121571 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:48.121582 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:48.121606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:48.136753 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:48.136784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:48.206914 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:48.206942 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:48.206958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:48.283843 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:48.283882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:48.325845 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:48.325878 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:47.794096 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.295275 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:47.182903 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.672827 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.206572 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.206900 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.881346 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:50.894098 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:50.894177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:50.927345 1039759 cri.go:89] found id: ""
	I0729 14:41:50.927375 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.927386 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:50.927399 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:50.927466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:50.962700 1039759 cri.go:89] found id: ""
	I0729 14:41:50.962726 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.962734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:50.962740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:50.962804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:50.997299 1039759 cri.go:89] found id: ""
	I0729 14:41:50.997334 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.997346 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:50.997354 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:50.997419 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:51.030157 1039759 cri.go:89] found id: ""
	I0729 14:41:51.030190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.030202 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:51.030211 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:51.030288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:51.063123 1039759 cri.go:89] found id: ""
	I0729 14:41:51.063151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.063162 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:51.063170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:51.063237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:51.096772 1039759 cri.go:89] found id: ""
	I0729 14:41:51.096819 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.096830 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:51.096838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:51.096912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:51.131976 1039759 cri.go:89] found id: ""
	I0729 14:41:51.132004 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.132014 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:51.132022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:51.132095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:51.167560 1039759 cri.go:89] found id: ""
	I0729 14:41:51.167599 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.167610 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:51.167622 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:51.167640 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:51.229416 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:51.229455 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:51.243576 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:51.243604 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:51.311103 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:51.311123 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:51.311139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:51.396369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:51.396432 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:52.793981 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.294172 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.673945 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:54.173681 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:56.174098 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.207656 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.709310 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.942329 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:53.955960 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:53.956027 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:53.988039 1039759 cri.go:89] found id: ""
	I0729 14:41:53.988074 1039759 logs.go:276] 0 containers: []
	W0729 14:41:53.988085 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:53.988094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:53.988162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:54.020948 1039759 cri.go:89] found id: ""
	I0729 14:41:54.020981 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.020992 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:54.020999 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:54.021067 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:54.053716 1039759 cri.go:89] found id: ""
	I0729 14:41:54.053744 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.053752 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:54.053759 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:54.053811 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:54.092348 1039759 cri.go:89] found id: ""
	I0729 14:41:54.092378 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.092390 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:54.092398 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:54.092471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:54.126114 1039759 cri.go:89] found id: ""
	I0729 14:41:54.126176 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.126189 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:54.126199 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:54.126316 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:54.162125 1039759 cri.go:89] found id: ""
	I0729 14:41:54.162157 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.162167 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:54.162174 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:54.162241 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:54.202407 1039759 cri.go:89] found id: ""
	I0729 14:41:54.202439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.202448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:54.202456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:54.202522 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:54.238650 1039759 cri.go:89] found id: ""
	I0729 14:41:54.238684 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.238695 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:54.238704 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:54.238718 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:54.291200 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:54.291243 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:54.306381 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:54.306415 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:54.371355 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:54.371384 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:54.371399 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:54.455200 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:54.455237 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:56.994689 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:57.007893 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:57.007958 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:57.041775 1039759 cri.go:89] found id: ""
	I0729 14:41:57.041808 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.041820 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:57.041828 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:57.041894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:57.075409 1039759 cri.go:89] found id: ""
	I0729 14:41:57.075442 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.075454 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:57.075462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:57.075524 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:57.120963 1039759 cri.go:89] found id: ""
	I0729 14:41:57.121000 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.121011 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:57.121019 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:57.121088 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:57.164882 1039759 cri.go:89] found id: ""
	I0729 14:41:57.164912 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.164923 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:57.164932 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:57.165001 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:57.198511 1039759 cri.go:89] found id: ""
	I0729 14:41:57.198537 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.198545 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:57.198550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:57.198604 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:57.238516 1039759 cri.go:89] found id: ""
	I0729 14:41:57.238544 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.238552 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:57.238559 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:57.238622 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:57.271823 1039759 cri.go:89] found id: ""
	I0729 14:41:57.271854 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.271865 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:57.271873 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:57.271937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:57.308435 1039759 cri.go:89] found id: ""
	I0729 14:41:57.308460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.308472 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:57.308483 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:57.308506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:57.359783 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:57.359818 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:57.372669 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:57.372698 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:57.440979 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:57.441004 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:57.441018 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:57.520105 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:57.520139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:57.295421 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:59.793704 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.673850 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:01.172547 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.207493 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.208108 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:02.208334 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.060542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:00.076125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:00.076192 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:00.113095 1039759 cri.go:89] found id: ""
	I0729 14:42:00.113129 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.113137 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:00.113150 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:00.113206 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:00.154104 1039759 cri.go:89] found id: ""
	I0729 14:42:00.154132 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.154139 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:00.154146 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:00.154202 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:00.190416 1039759 cri.go:89] found id: ""
	I0729 14:42:00.190443 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.190454 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:00.190462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:00.190532 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:00.228138 1039759 cri.go:89] found id: ""
	I0729 14:42:00.228173 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.228185 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:00.228192 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:00.228261 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:00.265679 1039759 cri.go:89] found id: ""
	I0729 14:42:00.265706 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.265715 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:00.265721 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:00.265787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:00.300283 1039759 cri.go:89] found id: ""
	I0729 14:42:00.300315 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.300333 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:00.300341 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:00.300433 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:00.339224 1039759 cri.go:89] found id: ""
	I0729 14:42:00.339255 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.339264 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:00.339270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:00.339333 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:00.375780 1039759 cri.go:89] found id: ""
	I0729 14:42:00.375815 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.375826 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:00.375836 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:00.375851 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:00.425145 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:00.425190 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:00.438860 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:00.438891 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:00.512668 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:00.512695 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:00.512714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:00.597083 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:00.597139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.141962 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:03.156295 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:03.156372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:03.192860 1039759 cri.go:89] found id: ""
	I0729 14:42:03.192891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.192902 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:03.192911 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:03.192982 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:03.234078 1039759 cri.go:89] found id: ""
	I0729 14:42:03.234104 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.234113 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:03.234119 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:03.234171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:03.268099 1039759 cri.go:89] found id: ""
	I0729 14:42:03.268124 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.268131 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:03.268138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:03.268197 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:03.306470 1039759 cri.go:89] found id: ""
	I0729 14:42:03.306498 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.306507 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:03.306513 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:03.306596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:03.341902 1039759 cri.go:89] found id: ""
	I0729 14:42:03.341933 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.341944 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:03.341952 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:03.342019 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:03.377235 1039759 cri.go:89] found id: ""
	I0729 14:42:03.377271 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.377282 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:03.377291 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:03.377355 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:03.411273 1039759 cri.go:89] found id: ""
	I0729 14:42:03.411308 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.411316 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:03.411322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:03.411397 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:03.446482 1039759 cri.go:89] found id: ""
	I0729 14:42:03.446511 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.446519 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:03.446530 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:03.446545 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:03.460222 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:03.460262 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:03.548149 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:03.548175 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:03.548191 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:03.640563 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:03.640608 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.681685 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:03.681713 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:02.293412 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.793239 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:03.174082 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:05.674438 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.706798 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.707818 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.234967 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:06.249656 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:06.249726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:06.284768 1039759 cri.go:89] found id: ""
	I0729 14:42:06.284798 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.284810 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:06.284822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:06.284880 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:06.321109 1039759 cri.go:89] found id: ""
	I0729 14:42:06.321140 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.321150 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:06.321158 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:06.321229 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:06.357238 1039759 cri.go:89] found id: ""
	I0729 14:42:06.357269 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.357278 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:06.357284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:06.357342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:06.391613 1039759 cri.go:89] found id: ""
	I0729 14:42:06.391643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.391653 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:06.391661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:06.391726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:06.428782 1039759 cri.go:89] found id: ""
	I0729 14:42:06.428813 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.428823 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:06.428831 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:06.428890 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:06.463558 1039759 cri.go:89] found id: ""
	I0729 14:42:06.463596 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.463607 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:06.463615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:06.463683 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:06.500442 1039759 cri.go:89] found id: ""
	I0729 14:42:06.500474 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.500484 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:06.500501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:06.500579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:06.535589 1039759 cri.go:89] found id: ""
	I0729 14:42:06.535627 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.535638 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:06.535650 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:06.535668 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:06.584641 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:06.584676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:06.597702 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:06.597737 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:06.664499 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:06.664537 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:06.664555 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:06.744808 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:06.744845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:06.793853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.294853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.172993 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:10.174863 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.707874 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:11.209387 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.286151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:09.307822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:09.307892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:09.369334 1039759 cri.go:89] found id: ""
	I0729 14:42:09.369363 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.369373 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:09.369381 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:09.369458 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:09.402302 1039759 cri.go:89] found id: ""
	I0729 14:42:09.402334 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.402345 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:09.402353 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:09.402423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:09.436351 1039759 cri.go:89] found id: ""
	I0729 14:42:09.436380 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.436402 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:09.436429 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:09.436501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:09.467735 1039759 cri.go:89] found id: ""
	I0729 14:42:09.467768 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.467780 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:09.467788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:09.467849 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:09.503328 1039759 cri.go:89] found id: ""
	I0729 14:42:09.503355 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.503367 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:09.503376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:09.503438 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:09.540012 1039759 cri.go:89] found id: ""
	I0729 14:42:09.540039 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.540047 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:09.540053 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:09.540106 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:09.576737 1039759 cri.go:89] found id: ""
	I0729 14:42:09.576801 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.576814 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:09.576822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:09.576920 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:09.614624 1039759 cri.go:89] found id: ""
	I0729 14:42:09.614651 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.614659 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:09.614669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:09.614684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:09.650533 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:09.650580 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:09.709144 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:09.709175 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:09.724147 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:09.724173 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:09.790737 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:09.790760 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:09.790775 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.376968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:12.390344 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:12.390409 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:12.424820 1039759 cri.go:89] found id: ""
	I0729 14:42:12.424849 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.424860 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:12.424876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:12.424943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:12.457444 1039759 cri.go:89] found id: ""
	I0729 14:42:12.457480 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.457492 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:12.457500 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:12.457561 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:12.490027 1039759 cri.go:89] found id: ""
	I0729 14:42:12.490058 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.490069 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:12.490077 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:12.490145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:12.523229 1039759 cri.go:89] found id: ""
	I0729 14:42:12.523256 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.523265 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:12.523270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:12.523321 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:12.557849 1039759 cri.go:89] found id: ""
	I0729 14:42:12.557875 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.557885 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:12.557891 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:12.557951 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:12.592943 1039759 cri.go:89] found id: ""
	I0729 14:42:12.592973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.592982 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:12.592989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:12.593059 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:12.626495 1039759 cri.go:89] found id: ""
	I0729 14:42:12.626531 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.626539 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:12.626557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:12.626641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:12.663764 1039759 cri.go:89] found id: ""
	I0729 14:42:12.663793 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.663805 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:12.663818 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:12.663835 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:12.722521 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:12.722556 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:12.736476 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:12.736505 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:12.809582 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:12.809617 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:12.809637 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.890665 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:12.890712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:11.793144 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.793447 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.794630 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:12.673257 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.173702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.707929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.707964 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.429702 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:15.443258 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:15.443340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:15.477170 1039759 cri.go:89] found id: ""
	I0729 14:42:15.477198 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.477207 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:15.477212 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:15.477266 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:15.511614 1039759 cri.go:89] found id: ""
	I0729 14:42:15.511652 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.511665 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:15.511671 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:15.511739 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:15.548472 1039759 cri.go:89] found id: ""
	I0729 14:42:15.548501 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.548511 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:15.548519 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:15.548590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:15.589060 1039759 cri.go:89] found id: ""
	I0729 14:42:15.589090 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.589102 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:15.589110 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:15.589185 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:15.622846 1039759 cri.go:89] found id: ""
	I0729 14:42:15.622873 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.622882 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:15.622887 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:15.622943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:15.656193 1039759 cri.go:89] found id: ""
	I0729 14:42:15.656220 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.656229 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:15.656237 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:15.656307 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:15.691301 1039759 cri.go:89] found id: ""
	I0729 14:42:15.691336 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.691348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:15.691357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:15.691420 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:15.729923 1039759 cri.go:89] found id: ""
	I0729 14:42:15.729963 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.729974 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:15.729988 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:15.730004 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:15.783531 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:15.783569 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:15.799590 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:15.799619 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:15.874849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:15.874886 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:15.874901 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:15.957384 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:15.957424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.497035 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:18.511538 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:18.511616 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:18.550512 1039759 cri.go:89] found id: ""
	I0729 14:42:18.550552 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.550573 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:18.550582 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:18.550642 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:18.585910 1039759 cri.go:89] found id: ""
	I0729 14:42:18.585942 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.585954 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:18.585962 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:18.586031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:18.619680 1039759 cri.go:89] found id: ""
	I0729 14:42:18.619712 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.619722 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:18.619730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:18.619799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:18.651559 1039759 cri.go:89] found id: ""
	I0729 14:42:18.651592 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.651604 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:18.651613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:18.651688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:18.686668 1039759 cri.go:89] found id: ""
	I0729 14:42:18.686693 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.686701 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:18.686711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:18.686764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:18.722832 1039759 cri.go:89] found id: ""
	I0729 14:42:18.722859 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.722869 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:18.722876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:18.722927 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:18.758261 1039759 cri.go:89] found id: ""
	I0729 14:42:18.758289 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.758302 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:18.758310 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:18.758378 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:18.795190 1039759 cri.go:89] found id: ""
	I0729 14:42:18.795216 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.795227 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:18.795237 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:18.795251 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.835331 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:18.835366 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:17.796916 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.294082 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:17.673000 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:19.674010 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.209178 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.707421 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.889707 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:18.889745 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:18.902477 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:18.902503 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:18.970712 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:18.970735 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:18.970748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:21.552092 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:21.566581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.566669 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.600230 1039759 cri.go:89] found id: ""
	I0729 14:42:21.600261 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.600275 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:21.600283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.600346 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.636576 1039759 cri.go:89] found id: ""
	I0729 14:42:21.636616 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.636627 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:21.636635 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.636705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.672944 1039759 cri.go:89] found id: ""
	I0729 14:42:21.672973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.672984 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:21.672997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.673063 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.708555 1039759 cri.go:89] found id: ""
	I0729 14:42:21.708582 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.708601 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:21.708613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:21.708673 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:21.744862 1039759 cri.go:89] found id: ""
	I0729 14:42:21.744891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.744902 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:21.744908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:21.744973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:21.779084 1039759 cri.go:89] found id: ""
	I0729 14:42:21.779111 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.779119 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:21.779126 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:21.779183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:21.819931 1039759 cri.go:89] found id: ""
	I0729 14:42:21.819972 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.819981 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:21.819989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:21.820047 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:21.855472 1039759 cri.go:89] found id: ""
	I0729 14:42:21.855500 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.855509 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:21.855522 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:21.855539 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:21.925561 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:21.925579 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:21.925596 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.015986 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:22.016032 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:22.059898 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:22.059935 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:22.129018 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.129055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:21.787886 1039263 pod_ready.go:81] duration metric: took 4m0.000465481s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:21.787929 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 14:42:21.787945 1039263 pod_ready.go:38] duration metric: took 4m5.237036546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:21.787973 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:42:21.788025 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.788089 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.857594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:21.857613 1039263 cri.go:89] found id: ""
	I0729 14:42:21.857620 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:21.857674 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.862462 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.862523 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.903562 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:21.903594 1039263 cri.go:89] found id: ""
	I0729 14:42:21.903604 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:21.903660 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.908232 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.908327 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.947632 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:21.947663 1039263 cri.go:89] found id: ""
	I0729 14:42:21.947674 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:21.947737 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.952576 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.952649 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.995318 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:21.995343 1039263 cri.go:89] found id: ""
	I0729 14:42:21.995351 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:21.995418 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.000352 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:22.000440 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:22.040544 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.040572 1039263 cri.go:89] found id: ""
	I0729 14:42:22.040582 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:22.040648 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.044840 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:22.044910 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:22.090787 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:22.090816 1039263 cri.go:89] found id: ""
	I0729 14:42:22.090827 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:22.090897 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.096748 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:22.096826 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:22.143491 1039263 cri.go:89] found id: ""
	I0729 14:42:22.143522 1039263 logs.go:276] 0 containers: []
	W0729 14:42:22.143534 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:22.143541 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:22.143609 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:22.179378 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:22.179404 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:22.179409 1039263 cri.go:89] found id: ""
	I0729 14:42:22.179419 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:22.179482 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.184686 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.189009 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:22.189029 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:22.250475 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:22.250510 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.286581 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:22.286622 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.325541 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:22.325570 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.831822 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.831875 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:22.846540 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:22.846588 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:22.970758 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:22.970796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:23.013428 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:23.013467 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:23.064784 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:23.064820 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:23.111615 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:23.111653 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:23.151296 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:23.151328 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:23.198650 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:23.198692 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:23.259196 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:23.259247 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.808980 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:25.829180 1039263 api_server.go:72] duration metric: took 4m16.997740137s to wait for apiserver process to appear ...
	I0729 14:42:25.829211 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:42:25.829260 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:25.829335 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:25.875138 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.875167 1039263 cri.go:89] found id: ""
	I0729 14:42:25.875175 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:25.875230 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.879855 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:25.879937 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:25.916938 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:25.916964 1039263 cri.go:89] found id: ""
	I0729 14:42:25.916974 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:25.917036 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.921166 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:25.921224 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:25.958196 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:25.958224 1039263 cri.go:89] found id: ""
	I0729 14:42:25.958234 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:25.958300 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.962697 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:25.962760 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:26.000162 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:26.000195 1039263 cri.go:89] found id: ""
	I0729 14:42:26.000206 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:26.000277 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.004518 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:26.004594 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:26.041099 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:26.041133 1039263 cri.go:89] found id: ""
	I0729 14:42:26.041144 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:26.041208 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.045334 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:26.045412 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:26.082783 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:26.082815 1039263 cri.go:89] found id: ""
	I0729 14:42:26.082826 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:26.082901 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.086996 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:26.087063 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:26.123636 1039263 cri.go:89] found id: ""
	I0729 14:42:26.123677 1039263 logs.go:276] 0 containers: []
	W0729 14:42:26.123688 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:26.123694 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:26.123756 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:26.163819 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.163849 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.163855 1039263 cri.go:89] found id: ""
	I0729 14:42:26.163864 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:26.163929 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.168611 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.173125 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:26.173155 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.173593 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:22.708101 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:25.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:27.207926 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.645474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:24.658107 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:24.658171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:24.696604 1039759 cri.go:89] found id: ""
	I0729 14:42:24.696635 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.696645 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:24.696653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:24.696725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:24.733862 1039759 cri.go:89] found id: ""
	I0729 14:42:24.733887 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.733894 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:24.733901 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:24.733957 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:24.770614 1039759 cri.go:89] found id: ""
	I0729 14:42:24.770644 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.770656 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:24.770664 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:24.770734 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:24.806368 1039759 cri.go:89] found id: ""
	I0729 14:42:24.806394 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.806403 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:24.806408 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:24.806470 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:24.838490 1039759 cri.go:89] found id: ""
	I0729 14:42:24.838526 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.838534 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:24.838541 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:24.838596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:24.871017 1039759 cri.go:89] found id: ""
	I0729 14:42:24.871043 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.871051 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:24.871057 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:24.871128 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:24.903281 1039759 cri.go:89] found id: ""
	I0729 14:42:24.903311 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.903322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:24.903330 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:24.903403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:24.937245 1039759 cri.go:89] found id: ""
	I0729 14:42:24.937279 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.937291 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:24.937304 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:24.937319 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:24.989518 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:24.989551 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:25.005021 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:25.005055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:25.080849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:25.080877 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:25.080893 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:25.163742 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:25.163784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:27.706182 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:27.719350 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:27.719425 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:27.756955 1039759 cri.go:89] found id: ""
	I0729 14:42:27.756982 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.756990 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:27.756997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:27.757054 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:27.791975 1039759 cri.go:89] found id: ""
	I0729 14:42:27.792014 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.792025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:27.792033 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:27.792095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:27.834188 1039759 cri.go:89] found id: ""
	I0729 14:42:27.834215 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.834223 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:27.834230 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:27.834296 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:27.867798 1039759 cri.go:89] found id: ""
	I0729 14:42:27.867834 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.867843 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:27.867851 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:27.867918 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:27.900316 1039759 cri.go:89] found id: ""
	I0729 14:42:27.900343 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.900351 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:27.900357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:27.900422 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:27.932361 1039759 cri.go:89] found id: ""
	I0729 14:42:27.932391 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.932402 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:27.932425 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:27.932493 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:27.965530 1039759 cri.go:89] found id: ""
	I0729 14:42:27.965562 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.965573 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:27.965581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:27.965651 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:27.999582 1039759 cri.go:89] found id: ""
	I0729 14:42:27.999608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.999617 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:27.999626 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:27.999654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:28.069415 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:28.069438 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:28.069454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:28.149781 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:28.149821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:28.190045 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:28.190072 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:28.244147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:28.244188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.217755 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:26.217796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.257363 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:26.257399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.297502 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:26.297534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:26.729336 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:26.729370 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:26.779172 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:26.779213 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.794369 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:26.794399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:26.857964 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:26.858000 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.895052 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:26.895083 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:26.936360 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:26.936395 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:27.037118 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:27.037160 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:27.089764 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:27.089798 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:27.134009 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:27.134042 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.690960 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:42:29.696457 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:42:29.697313 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:42:29.697335 1039263 api_server.go:131] duration metric: took 3.868117139s to wait for apiserver health ...
	I0729 14:42:29.697343 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:42:29.697370 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:29.697430 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:29.740594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:29.740623 1039263 cri.go:89] found id: ""
	I0729 14:42:29.740633 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:29.740696 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.745183 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:29.745257 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:29.780091 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:29.780112 1039263 cri.go:89] found id: ""
	I0729 14:42:29.780119 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:29.780178 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.784241 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:29.784305 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:29.825618 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:29.825641 1039263 cri.go:89] found id: ""
	I0729 14:42:29.825649 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:29.825715 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.830291 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:29.830351 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:29.866651 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:29.866685 1039263 cri.go:89] found id: ""
	I0729 14:42:29.866695 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:29.866758 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.871440 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:29.871494 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:29.911944 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:29.911968 1039263 cri.go:89] found id: ""
	I0729 14:42:29.911976 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:29.912037 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.916604 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:29.916680 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:29.954334 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.954361 1039263 cri.go:89] found id: ""
	I0729 14:42:29.954371 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:29.954446 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.959051 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:29.959130 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:29.996760 1039263 cri.go:89] found id: ""
	I0729 14:42:29.996795 1039263 logs.go:276] 0 containers: []
	W0729 14:42:29.996804 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:29.996812 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:29.996883 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:30.034562 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.034598 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.034604 1039263 cri.go:89] found id: ""
	I0729 14:42:30.034614 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:30.034682 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.039588 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.043866 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:30.043889 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:30.091309 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:30.091349 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:30.149888 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:30.149926 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:30.189441 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:30.189479 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:30.250850 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:30.250890 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.290077 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:30.290111 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.329035 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:30.329068 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:30.383068 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:30.383113 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:30.497009 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:30.497045 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:30.914489 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:30.914534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:30.972901 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:30.972951 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:31.021798 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.021838 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:31.040147 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:31.040182 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.674294 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.173375 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:31.173588 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.710051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:32.209382 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.593681 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:42:33.593711 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.593716 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.593719 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.593723 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.593725 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.593728 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.593733 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.593736 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.593744 1039263 system_pods.go:74] duration metric: took 3.896394577s to wait for pod list to return data ...
	I0729 14:42:33.593751 1039263 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:42:33.596176 1039263 default_sa.go:45] found service account: "default"
	I0729 14:42:33.596197 1039263 default_sa.go:55] duration metric: took 2.440561ms for default service account to be created ...
	I0729 14:42:33.596205 1039263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:42:33.601830 1039263 system_pods.go:86] 8 kube-system pods found
	I0729 14:42:33.601855 1039263 system_pods.go:89] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.601861 1039263 system_pods.go:89] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.601866 1039263 system_pods.go:89] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.601871 1039263 system_pods.go:89] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.601878 1039263 system_pods.go:89] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.601887 1039263 system_pods.go:89] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.601897 1039263 system_pods.go:89] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.601908 1039263 system_pods.go:89] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.601921 1039263 system_pods.go:126] duration metric: took 5.70985ms to wait for k8s-apps to be running ...
	I0729 14:42:33.601934 1039263 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:42:33.601994 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:33.620869 1039263 system_svc.go:56] duration metric: took 18.921974ms WaitForService to wait for kubelet
	I0729 14:42:33.620907 1039263 kubeadm.go:582] duration metric: took 4m24.7894747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:42:33.620939 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:42:33.623517 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:42:33.623538 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:42:33.623562 1039263 node_conditions.go:105] duration metric: took 2.617272ms to run NodePressure ...
	I0729 14:42:33.623582 1039263 start.go:241] waiting for startup goroutines ...
	I0729 14:42:33.623591 1039263 start.go:246] waiting for cluster config update ...
	I0729 14:42:33.623601 1039263 start.go:255] writing updated cluster config ...
	I0729 14:42:33.623897 1039263 ssh_runner.go:195] Run: rm -f paused
	I0729 14:42:33.677961 1039263 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:42:33.679952 1039263 out.go:177] * Done! kubectl is now configured to use "embed-certs-668123" cluster and "default" namespace by default
	I0729 14:42:30.758335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:30.771788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:30.771860 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:30.807608 1039759 cri.go:89] found id: ""
	I0729 14:42:30.807633 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.807641 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:30.807647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:30.807709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:30.842361 1039759 cri.go:89] found id: ""
	I0729 14:42:30.842389 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.842397 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:30.842404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:30.842474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:30.879123 1039759 cri.go:89] found id: ""
	I0729 14:42:30.879149 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.879157 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:30.879162 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:30.879228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:30.913042 1039759 cri.go:89] found id: ""
	I0729 14:42:30.913072 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.913084 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:30.913092 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:30.913162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:30.949867 1039759 cri.go:89] found id: ""
	I0729 14:42:30.949900 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.949910 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:30.949919 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:30.949988 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:30.997468 1039759 cri.go:89] found id: ""
	I0729 14:42:30.997497 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.997509 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:30.997516 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:30.997606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:31.039611 1039759 cri.go:89] found id: ""
	I0729 14:42:31.039643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.039654 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:31.039662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:31.039730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:31.085802 1039759 cri.go:89] found id: ""
	I0729 14:42:31.085839 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.085851 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:31.085862 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:31.085890 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:31.155919 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:31.155941 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:31.155954 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:31.232795 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:31.232833 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:31.270647 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:31.270682 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:31.324648 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.324685 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:33.839801 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:33.853358 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:33.853417 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:33.674345 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:36.174468 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:34.707752 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:37.209918 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.889294 1039759 cri.go:89] found id: ""
	I0729 14:42:33.889323 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.889334 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:33.889342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:33.889413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:33.930106 1039759 cri.go:89] found id: ""
	I0729 14:42:33.930130 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.930142 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:33.930149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:33.930211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:33.973607 1039759 cri.go:89] found id: ""
	I0729 14:42:33.973634 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.973646 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:33.973654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:33.973715 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:34.010103 1039759 cri.go:89] found id: ""
	I0729 14:42:34.010133 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.010142 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:34.010149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:34.010209 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:34.044050 1039759 cri.go:89] found id: ""
	I0729 14:42:34.044080 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.044092 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:34.044099 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:34.044174 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:34.081222 1039759 cri.go:89] found id: ""
	I0729 14:42:34.081250 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.081260 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:34.081268 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:34.081360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:34.115837 1039759 cri.go:89] found id: ""
	I0729 14:42:34.115878 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.115891 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:34.115899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:34.115973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:34.151086 1039759 cri.go:89] found id: ""
	I0729 14:42:34.151116 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.151126 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:34.151139 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:34.151156 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:34.164058 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:34.164087 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:34.238481 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:34.238503 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:34.238518 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:34.316236 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:34.316279 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:34.356281 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:34.356316 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:36.910374 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:36.924907 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:36.925008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:36.960508 1039759 cri.go:89] found id: ""
	I0729 14:42:36.960535 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.960543 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:36.960550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:36.960631 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:36.999840 1039759 cri.go:89] found id: ""
	I0729 14:42:36.999869 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.999881 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:36.999889 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:36.999960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:37.032801 1039759 cri.go:89] found id: ""
	I0729 14:42:37.032832 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.032840 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:37.032847 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:37.032907 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:37.066359 1039759 cri.go:89] found id: ""
	I0729 14:42:37.066386 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.066394 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:37.066401 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:37.066454 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:37.103816 1039759 cri.go:89] found id: ""
	I0729 14:42:37.103844 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.103852 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:37.103859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:37.103922 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:37.137135 1039759 cri.go:89] found id: ""
	I0729 14:42:37.137175 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.137186 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:37.137194 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:37.137267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:37.170819 1039759 cri.go:89] found id: ""
	I0729 14:42:37.170851 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.170863 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:37.170871 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:37.170941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:37.206427 1039759 cri.go:89] found id: ""
	I0729 14:42:37.206456 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.206467 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:37.206478 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:37.206492 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:37.287119 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:37.287160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:37.331090 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:37.331119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:37.392147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:37.392189 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:37.406017 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:37.406047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:37.471644 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:38.673603 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:40.674214 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:39.706915 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:41.201453 1039440 pod_ready.go:81] duration metric: took 4m0.000454399s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:41.201488 1039440 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:42:41.201514 1039440 pod_ready.go:38] duration metric: took 4m13.052610312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:41.201553 1039440 kubeadm.go:597] duration metric: took 4m22.712976139s to restartPrimaryControlPlane
	W0729 14:42:41.201639 1039440 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:41.201696 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:39.972835 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:39.985878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:39.985945 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:40.020312 1039759 cri.go:89] found id: ""
	I0729 14:42:40.020349 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.020360 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:40.020368 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:40.020456 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:40.055688 1039759 cri.go:89] found id: ""
	I0729 14:42:40.055721 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.055732 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:40.055740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:40.055799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:40.090432 1039759 cri.go:89] found id: ""
	I0729 14:42:40.090463 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.090472 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:40.090478 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:40.090549 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:40.127794 1039759 cri.go:89] found id: ""
	I0729 14:42:40.127823 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.127832 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:40.127838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:40.127894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:40.162911 1039759 cri.go:89] found id: ""
	I0729 14:42:40.162944 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.162953 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:40.162959 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:40.163020 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:40.201578 1039759 cri.go:89] found id: ""
	I0729 14:42:40.201608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.201619 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:40.201625 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:40.201684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:40.247314 1039759 cri.go:89] found id: ""
	I0729 14:42:40.247340 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.247348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:40.247363 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:40.247436 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:40.285393 1039759 cri.go:89] found id: ""
	I0729 14:42:40.285422 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.285431 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:40.285440 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:40.285458 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:40.299901 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:40.299933 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:40.372774 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:40.372802 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:40.372821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:40.454392 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:40.454447 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:40.494641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:40.494671 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:43.046060 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:43.058790 1039759 kubeadm.go:597] duration metric: took 4m3.37086398s to restartPrimaryControlPlane
	W0729 14:42:43.058888 1039759 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:43.058920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:43.544647 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:43.560304 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:42:43.570229 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:42:43.579922 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:42:43.579946 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:42:43.580004 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:42:43.589520 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:42:43.589591 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:42:43.600286 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:42:43.611565 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:42:43.611629 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:42:43.623432 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.633289 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:42:43.633338 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.643410 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:42:43.653723 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:42:43.653816 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
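The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A condensed shell sketch of the same logic (endpoint and port taken from this run; not minikube's actual implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it references the expected endpoint, otherwise drop it
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done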
	I0729 14:42:43.663840 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:42:43.735243 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:42:43.735314 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:42:43.904148 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:42:43.904310 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:42:43.904480 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:42:44.101401 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:42:44.103392 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:42:44.103499 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:42:44.103580 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:42:44.103693 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:42:44.103829 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:42:44.103944 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:42:44.104054 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:42:44.104146 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:42:44.104360 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:42:44.104599 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:42:44.105264 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:42:44.105363 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:42:44.105461 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:42:44.426107 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:42:44.593004 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:42:44.845387 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:42:44.934634 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:42:44.959808 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:42:44.961918 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:42:44.961990 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:42:45.117986 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:42:42.678218 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.175453 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.119775 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:42:45.119913 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:42:45.121333 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:42:45.123001 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:42:45.123783 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:42:45.126031 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
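kubeadm's wait-control-plane phase polls the kubelet and apiserver health endpoints until they answer; when this run later times out, the same probes can be issued by hand (ports assumed from this cluster's configuration: kubelet healthz on 10248, apiserver on 8443):

    curl -s http://localhost:10248/healthz    # kubelet health, plain HTTP on localhost
    curl -sk https://localhost:8443/healthz   # apiserver health; -k because the serving cert is not in the host trust store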
	I0729 14:42:47.673678 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:49.674212 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:52.173086 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:54.173797 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:56.178948 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:58.674432 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:00.675207 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:03.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:05.175460 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:07.674421 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:09.674478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:12.882329 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.680602745s)
	I0729 14:43:12.882426 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:12.900267 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:12.910750 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:12.921172 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:12.921194 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:12.921244 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:43:12.931186 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:12.931243 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:12.940800 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:43:12.949875 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:12.949929 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:12.959555 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.968817 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:12.968871 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.978560 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:43:12.987657 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:12.987700 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:12.997142 1039440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:13.057245 1039440 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 14:43:13.057405 1039440 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:13.205227 1039440 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:13.205381 1039440 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:13.205541 1039440 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:43:13.404885 1039440 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:13.407054 1039440 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:13.407148 1039440 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:13.407232 1039440 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:13.407329 1039440 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:13.407411 1039440 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:13.407509 1039440 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:13.407598 1039440 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:13.407688 1039440 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:13.407774 1039440 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:13.407889 1039440 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:13.408006 1039440 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:13.408071 1039440 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:13.408177 1039440 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:13.563569 1039440 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:14.001138 1039440 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:14.091368 1039440 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:14.238732 1039440 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:14.344460 1039440 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:14.346386 1039440 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:14.349309 1039440 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:12.174022 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.673166 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.351183 1039440 out.go:204]   - Booting up control plane ...
	I0729 14:43:14.351293 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:14.351374 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:14.351671 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:14.375878 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:14.377114 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:14.377198 1039440 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:14.528561 1039440 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:14.528665 1039440 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:15.030447 1039440 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044001ms
	I0729 14:43:15.030591 1039440 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:43:20.033357 1039440 kubeadm.go:310] [api-check] The API server is healthy after 5.002708747s
	I0729 14:43:20.055871 1039440 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:43:20.069020 1039440 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:43:20.108465 1039440 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:43:20.108664 1039440 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-751306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:43:20.124596 1039440 kubeadm.go:310] [bootstrap-token] Using token: vqqt7g.hayxn6bly3sjo08s
	I0729 14:43:20.125995 1039440 out.go:204]   - Configuring RBAC rules ...
	I0729 14:43:20.126124 1039440 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:43:20.138826 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:43:20.145976 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:43:20.149166 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:43:20.152875 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:43:20.156268 1039440 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:43:20.446117 1039440 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:43:20.900251 1039440 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:43:21.446105 1039440 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:43:21.446920 1039440 kubeadm.go:310] 
	I0729 14:43:21.446984 1039440 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:43:21.446992 1039440 kubeadm.go:310] 
	I0729 14:43:21.447057 1039440 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:43:21.447063 1039440 kubeadm.go:310] 
	I0729 14:43:21.447084 1039440 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:43:21.447133 1039440 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:43:21.447176 1039440 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:43:21.447182 1039440 kubeadm.go:310] 
	I0729 14:43:21.447233 1039440 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:43:21.447242 1039440 kubeadm.go:310] 
	I0729 14:43:21.447310 1039440 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:43:21.447334 1039440 kubeadm.go:310] 
	I0729 14:43:21.447408 1039440 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:43:21.447515 1039440 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:43:21.447574 1039440 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:43:21.447582 1039440 kubeadm.go:310] 
	I0729 14:43:21.447652 1039440 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:43:21.447722 1039440 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:43:21.447728 1039440 kubeadm.go:310] 
	I0729 14:43:21.447799 1039440 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.447903 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:43:21.447931 1039440 kubeadm.go:310] 	--control-plane 
	I0729 14:43:21.447935 1039440 kubeadm.go:310] 
	I0729 14:43:21.448017 1039440 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:43:21.448025 1039440 kubeadm.go:310] 
	I0729 14:43:21.448115 1039440 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.448239 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:43:21.449071 1039440 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
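The bootstrap token printed above has a limited lifetime (24h by default), so the join command in a log like this usually cannot be reused verbatim; a fresh one can be generated on the control plane:

    sudo kubeadm token list                          # show existing bootstrap tokens and their expiry
    sudo kubeadm token create --print-join-command   # mint a new token and print a ready-to-use join command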
	I0729 14:43:21.449117 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:43:21.449134 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:43:21.450744 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:43:16.674887 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:19.175478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:21.452012 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:43:21.464232 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:43:21.486786 1039440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:43:21.486890 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.486887 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-751306 minikube.k8s.io/updated_at=2024_07_29T14_43_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=default-k8s-diff-port-751306 minikube.k8s.io/primary=true
	I0729 14:43:21.689413 1039440 ops.go:34] apiserver oom_adj: -16
	I0729 14:43:21.697342 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:22.198351 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.673361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:23.674189 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:26.173782 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:22.698043 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.198259 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.697640 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.198325 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.697702 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.198216 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.697625 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.197978 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.698039 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:27.197794 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.126835 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:43:25.127033 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:25.127306 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:28.174036 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:29.667306 1038758 pod_ready.go:81] duration metric: took 4m0.000473541s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	E0729 14:43:29.667341 1038758 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:43:29.667369 1038758 pod_ready.go:38] duration metric: took 4m13.916299366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:29.667407 1038758 kubeadm.go:597] duration metric: took 4m21.57875039s to restartPrimaryControlPlane
	W0729 14:43:29.667481 1038758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:43:29.667513 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
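The repeated pod_ready lines show metrics-server never reporting Ready, which is what eventually forces the cluster reset. Debugging this by hand would typically start with the following kubectl calls (the label selector and deployment name are assumptions based on the standard metrics-server addon, not taken from this log):

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod -l k8s-app=metrics-server   # check Events for image-pull or readiness-probe failures
    kubectl -n kube-system logs deployment/metrics-server --tail=100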
	I0729 14:43:27.698036 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.197941 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.697839 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.197525 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.698141 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.197670 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.697615 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.197999 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.697648 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:32.197647 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.127504 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:30.127777 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:32.697837 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.197692 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.697431 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.198048 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.698439 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.802320 1039440 kubeadm.go:1113] duration metric: took 13.31552277s to wait for elevateKubeSystemPrivileges
	I0729 14:43:34.802367 1039440 kubeadm.go:394] duration metric: took 5m16.369033556s to StartCluster
	I0729 14:43:34.802391 1039440 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.802488 1039440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:43:34.804740 1039440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.805049 1039440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:43:34.805148 1039440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:43:34.805251 1039440 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805262 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:43:34.805269 1039440 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805313 1039440 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805294 1039440 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805341 1039440 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:43:34.805358 1039440 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805369 1039440 addons.go:243] addon metrics-server should already be in state true
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805325 1039440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751306"
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805838 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805869 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805904 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805928 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805968 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.806026 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.806625 1039440 out.go:177] * Verifying Kubernetes components...
	I0729 14:43:34.807999 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:43:34.823091 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0729 14:43:34.823103 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0729 14:43:34.823532 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.823556 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.824084 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824111 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824372 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824399 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824427 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.824891 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.825049 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0729 14:43:34.825140 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.825191 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.825210 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.825415 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.825927 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.825945 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.826314 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.826903 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.826939 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.829361 1039440 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.829386 1039440 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:43:34.829417 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.829785 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.829832 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.841752 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I0729 14:43:34.842232 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.842938 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.842965 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.843370 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0729 14:43:34.843397 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.843713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.843818 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.844223 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.844247 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.844615 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.844805 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.846424 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.846619 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.848531 1039440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:43:34.848918 1039440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:43:34.849006 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0729 14:43:34.849421 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.849852 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:43:34.849870 1039440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:43:34.849888 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850037 1039440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:34.850053 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:43:34.850069 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850233 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.850251 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.850659 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.851665 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.851781 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.853937 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854441 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854518 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.854540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854589 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.854779 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855035 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.855098 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.855114 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.855169 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.855465 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.855658 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855828 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.856191 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.869648 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0729 14:43:34.870131 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.870600 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.870618 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.871134 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.871334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.873088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.873340 1039440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:34.873353 1039440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:43:34.873369 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.876289 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876751 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.876765 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876952 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.877132 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.877267 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.877375 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:35.022897 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:43:35.044537 1039440 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057697 1039440 node_ready.go:49] node "default-k8s-diff-port-751306" has status "Ready":"True"
	I0729 14:43:35.057729 1039440 node_ready.go:38] duration metric: took 13.149458ms for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057744 1039440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:35.073050 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:35.150661 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:35.170721 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:35.228871 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:43:35.228903 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:43:35.276845 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:43:35.276880 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:43:35.335623 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.335656 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:43:35.407804 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
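After the manifests are applied, whether metrics-server actually came up can be checked with standard kubectl calls (the deployment and APIService names below are the conventional ones for the metrics-server addon, assumed rather than read from this log):

    kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
    kubectl get apiservice v1beta1.metrics.k8s.io   # should report Available=True once the server registers
    kubectl top nodes                               # only works after the first metrics scrape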
	I0729 14:43:35.446540 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446567 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.446927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.446959 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.446972 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.446985 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.447286 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.447307 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.454199 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.454216 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.454476 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.454495 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.824592 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.824615 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.825058 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.825441 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.825525 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.825567 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.825576 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.827444 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.827454 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.827465 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331175 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331575 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331597 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331607 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331923 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331961 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331986 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.332003 1039440 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751306"
	I0729 14:43:36.333995 1039440 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 14:43:36.335441 1039440 addons.go:510] duration metric: took 1.53029708s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 14:43:37.081992 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.082019 1039440 pod_ready.go:81] duration metric: took 2.008931409s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.082031 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086173 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.086194 1039440 pod_ready.go:81] duration metric: took 4.154163ms for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086203 1039440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090617 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.090636 1039440 pod_ready.go:81] duration metric: took 4.42625ms for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090647 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094929 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.094950 1039440 pod_ready.go:81] duration metric: took 4.296245ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094962 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099462 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.099483 1039440 pod_ready.go:81] duration metric: took 4.513354ms for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099495 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478252 1039440 pod_ready.go:92] pod "kube-proxy-tqtjx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.478281 1039440 pod_ready.go:81] duration metric: took 378.778206ms for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478295 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878655 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.878678 1039440 pod_ready.go:81] duration metric: took 400.374407ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878686 1039440 pod_ready.go:38] duration metric: took 2.820929833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:37.878702 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:43:37.878752 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:43:37.894699 1039440 api_server.go:72] duration metric: took 3.08960429s to wait for apiserver process to appear ...
	I0729 14:43:37.894730 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:43:37.894767 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:43:37.899710 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:43:37.900733 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:43:37.900757 1039440 api_server.go:131] duration metric: took 6.019707ms to wait for apiserver health ...
	I0729 14:43:37.900765 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:43:38.083157 1039440 system_pods.go:59] 9 kube-system pods found
	I0729 14:43:38.083197 1039440 system_pods.go:61] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.083204 1039440 system_pods.go:61] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.083210 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.083215 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.083221 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.083226 1039440 system_pods.go:61] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.083231 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.083240 1039440 system_pods.go:61] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.083246 1039440 system_pods.go:61] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.083255 1039440 system_pods.go:74] duration metric: took 182.484884ms to wait for pod list to return data ...
	I0729 14:43:38.083269 1039440 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:43:38.277387 1039440 default_sa.go:45] found service account: "default"
	I0729 14:43:38.277418 1039440 default_sa.go:55] duration metric: took 194.142035ms for default service account to be created ...
	I0729 14:43:38.277429 1039440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:43:38.481158 1039440 system_pods.go:86] 9 kube-system pods found
	I0729 14:43:38.481194 1039440 system_pods.go:89] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.481202 1039440 system_pods.go:89] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.481210 1039440 system_pods.go:89] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.481217 1039440 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.481225 1039440 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.481230 1039440 system_pods.go:89] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.481236 1039440 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.481248 1039440 system_pods.go:89] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.481255 1039440 system_pods.go:89] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.481267 1039440 system_pods.go:126] duration metric: took 203.830126ms to wait for k8s-apps to be running ...
	I0729 14:43:38.481280 1039440 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:43:38.481329 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:38.496175 1039440 system_svc.go:56] duration metric: took 14.88714ms WaitForService to wait for kubelet
	I0729 14:43:38.496209 1039440 kubeadm.go:582] duration metric: took 3.691120463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:43:38.496237 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:43:38.677820 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:43:38.677847 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:43:38.677859 1039440 node_conditions.go:105] duration metric: took 181.616437ms to run NodePressure ...
	I0729 14:43:38.677874 1039440 start.go:241] waiting for startup goroutines ...
	I0729 14:43:38.677882 1039440 start.go:246] waiting for cluster config update ...
	I0729 14:43:38.677894 1039440 start.go:255] writing updated cluster config ...
	I0729 14:43:38.678166 1039440 ssh_runner.go:195] Run: rm -f paused
	I0729 14:43:38.728616 1039440 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:43:38.730494 1039440 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751306" cluster and "default" namespace by default
	I0729 14:43:40.128244 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:40.128447 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:55.945251 1038758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.277690166s)
	I0729 14:43:55.945335 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:55.960870 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:55.971175 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:55.981424 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:55.981456 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:55.981512 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:43:55.992098 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:55.992165 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:56.002242 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:43:56.011416 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:56.011486 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:56.020848 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.030219 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:56.030280 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.039957 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:43:56.049607 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:56.049670 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:56.059413 1038758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:56.109453 1038758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 14:43:56.109563 1038758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:56.230876 1038758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:56.231018 1038758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:56.231126 1038758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 14:43:56.244355 1038758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:56.246461 1038758 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:56.246573 1038758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:56.246666 1038758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:56.246755 1038758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:56.246843 1038758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:56.246964 1038758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:56.247169 1038758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:56.247267 1038758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:56.247365 1038758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:56.247473 1038758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:56.247588 1038758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:56.247646 1038758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:56.247718 1038758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:56.593641 1038758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:56.714510 1038758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:56.862780 1038758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:57.010367 1038758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:57.108324 1038758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:57.109028 1038758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:57.111425 1038758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:57.113088 1038758 out.go:204]   - Booting up control plane ...
	I0729 14:43:57.113217 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:57.113336 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:57.113501 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:57.135168 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:57.141915 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:57.142022 1038758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:57.269947 1038758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:57.270056 1038758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:57.772110 1038758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.03343ms
	I0729 14:43:57.772229 1038758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:44:02.773898 1038758 kubeadm.go:310] [api-check] The API server is healthy after 5.00168383s
	I0729 14:44:02.788629 1038758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:44:02.805813 1038758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:44:02.831687 1038758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:44:02.831963 1038758 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-603534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:44:02.842427 1038758 kubeadm.go:310] [bootstrap-token] Using token: hg3j3v.551bb9ju0g9ic9e6
	I0729 14:44:00.129004 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:00.129267 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:02.844018 1038758 out.go:204]   - Configuring RBAC rules ...
	I0729 14:44:02.844160 1038758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:44:02.851693 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:44:02.859496 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:44:02.863556 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:44:02.866896 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:44:02.871375 1038758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:44:03.181687 1038758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:44:03.618445 1038758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:44:04.184562 1038758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:44:04.185548 1038758 kubeadm.go:310] 
	I0729 14:44:04.185655 1038758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:44:04.185689 1038758 kubeadm.go:310] 
	I0729 14:44:04.185788 1038758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:44:04.185801 1038758 kubeadm.go:310] 
	I0729 14:44:04.185825 1038758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:44:04.185906 1038758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:44:04.185983 1038758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:44:04.185992 1038758 kubeadm.go:310] 
	I0729 14:44:04.186079 1038758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:44:04.186090 1038758 kubeadm.go:310] 
	I0729 14:44:04.186155 1038758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:44:04.186165 1038758 kubeadm.go:310] 
	I0729 14:44:04.186231 1038758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:44:04.186337 1038758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:44:04.186431 1038758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:44:04.186441 1038758 kubeadm.go:310] 
	I0729 14:44:04.186575 1038758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:44:04.186679 1038758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:44:04.186689 1038758 kubeadm.go:310] 
	I0729 14:44:04.186810 1038758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.186944 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:44:04.186974 1038758 kubeadm.go:310] 	--control-plane 
	I0729 14:44:04.186984 1038758 kubeadm.go:310] 
	I0729 14:44:04.187102 1038758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:44:04.187111 1038758 kubeadm.go:310] 
	I0729 14:44:04.187224 1038758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.187375 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:44:04.188377 1038758 kubeadm.go:310] W0729 14:43:56.090027    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188711 1038758 kubeadm.go:310] W0729 14:43:56.090887    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188834 1038758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:04.188852 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:44:04.188863 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:44:04.190535 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:44:04.191948 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:44:04.203414 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:44:04.223025 1038758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:44:04.223114 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.223132 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603534 minikube.k8s.io/updated_at=2024_07_29T14_44_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=no-preload-603534 minikube.k8s.io/primary=true
	I0729 14:44:04.240353 1038758 ops.go:34] apiserver oom_adj: -16
	I0729 14:44:04.442077 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.942458 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.442843 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.942138 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.442232 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.942611 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.442939 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.942661 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.443044 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.522590 1038758 kubeadm.go:1113] duration metric: took 4.299548803s to wait for elevateKubeSystemPrivileges
	I0729 14:44:08.522633 1038758 kubeadm.go:394] duration metric: took 5m0.491164642s to StartCluster
	I0729 14:44:08.522657 1038758 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.522755 1038758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:44:08.524573 1038758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.524893 1038758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:44:08.524999 1038758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:44:08.525112 1038758 addons.go:69] Setting storage-provisioner=true in profile "no-preload-603534"
	I0729 14:44:08.525150 1038758 addons.go:234] Setting addon storage-provisioner=true in "no-preload-603534"
	I0729 14:44:08.525146 1038758 addons.go:69] Setting default-storageclass=true in profile "no-preload-603534"
	I0729 14:44:08.525155 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:44:08.525167 1038758 addons.go:69] Setting metrics-server=true in profile "no-preload-603534"
	I0729 14:44:08.525182 1038758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603534"
	W0729 14:44:08.525162 1038758 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:44:08.525229 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525185 1038758 addons.go:234] Setting addon metrics-server=true in "no-preload-603534"
	W0729 14:44:08.525264 1038758 addons.go:243] addon metrics-server should already be in state true
	I0729 14:44:08.525294 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525510 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525553 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525652 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525668 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525688 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525715 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.526581 1038758 out.go:177] * Verifying Kubernetes components...
	I0729 14:44:08.527919 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:44:08.541874 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0729 14:44:08.542126 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0729 14:44:08.542251 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0729 14:44:08.542397 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542505 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542664 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542948 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.542969 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543075 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543090 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543115 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543127 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543323 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543546 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543551 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543758 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.543779 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544014 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.544035 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544149 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.548026 1038758 addons.go:234] Setting addon default-storageclass=true in "no-preload-603534"
	W0729 14:44:08.548048 1038758 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:44:08.548079 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.548457 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.548478 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.559699 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0729 14:44:08.560297 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.560916 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.560953 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.561332 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.561519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.563422 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.564073 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0729 14:44:08.564524 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.565011 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.565038 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.565427 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.565592 1038758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:44:08.565752 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.566901 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:44:08.566921 1038758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:44:08.566941 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.567688 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.568067 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I0729 14:44:08.568443 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.569019 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.569040 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.569462 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.569583 1038758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:44:08.570038 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.570074 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.571187 1038758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.571204 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:44:08.571223 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.571595 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572203 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.572247 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572506 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.572704 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.572893 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.573100 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.574551 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.574900 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.574919 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.575074 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.575286 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.575427 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.575551 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.585902 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0729 14:44:08.586319 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.586778 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.586803 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.587135 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.587357 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.588606 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.588827 1038758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.588844 1038758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:44:08.588861 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.591169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591434 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.591466 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591600 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.591766 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.591873 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.592103 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.752015 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:44:08.775498 1038758 node_ready.go:35] waiting up to 6m0s for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788547 1038758 node_ready.go:49] node "no-preload-603534" has status "Ready":"True"
	I0729 14:44:08.788572 1038758 node_ready.go:38] duration metric: took 13.040411ms for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788582 1038758 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:08.793475 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:08.861468 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.869542 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:44:08.869567 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:44:08.898398 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.911120 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:44:08.911148 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:44:08.931151 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:08.931179 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:44:08.976093 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:09.449857 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449885 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.449863 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449958 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450343 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450354 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450361 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450373 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450374 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450389 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450442 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450455 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450476 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450487 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450620 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450635 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450637 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450779 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450799 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.493934 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.493959 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.494303 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.494320 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.494342 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.706038 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706072 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.706366 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.706382 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.706391 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706398 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.707956 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.707958 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.707986 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.708015 1038758 addons.go:475] Verifying addon metrics-server=true in "no-preload-603534"
	I0729 14:44:09.709729 1038758 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:44:09.711283 1038758 addons.go:510] duration metric: took 1.186289164s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:44:10.807976 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:13.300325 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:15.800886 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.300042 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.800080 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.800111 1038758 pod_ready.go:81] duration metric: took 10.006613711s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.800124 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804949 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.804974 1038758 pod_ready.go:81] duration metric: took 4.840477ms for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804985 1038758 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810160 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.810176 1038758 pod_ready.go:81] duration metric: took 5.184516ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810185 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814785 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.814807 1038758 pod_ready.go:81] duration metric: took 4.615516ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814819 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819023 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.819044 1038758 pod_ready.go:81] duration metric: took 4.215656ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819056 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198226 1038758 pod_ready.go:92] pod "kube-proxy-7mr4z" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.198252 1038758 pod_ready.go:81] duration metric: took 379.18928ms for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198265 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598783 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.598824 1038758 pod_ready.go:81] duration metric: took 400.55255ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598835 1038758 pod_ready.go:38] duration metric: took 10.810240266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:19.598865 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:44:19.598931 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:44:19.615165 1038758 api_server.go:72] duration metric: took 11.090236578s to wait for apiserver process to appear ...
	I0729 14:44:19.615190 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:44:19.615211 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:44:19.619574 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:44:19.620586 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:44:19.620610 1038758 api_server.go:131] duration metric: took 5.412598ms to wait for apiserver health ...
	I0729 14:44:19.620620 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:44:19.802376 1038758 system_pods.go:59] 9 kube-system pods found
	I0729 14:44:19.802408 1038758 system_pods.go:61] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:19.802415 1038758 system_pods.go:61] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:19.802420 1038758 system_pods.go:61] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:19.802429 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:19.802434 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:19.802441 1038758 system_pods.go:61] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:19.802446 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:19.802454 1038758 system_pods.go:61] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:19.802470 1038758 system_pods.go:61] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:19.802482 1038758 system_pods.go:74] duration metric: took 181.853357ms to wait for pod list to return data ...
	I0729 14:44:19.802491 1038758 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:44:19.998312 1038758 default_sa.go:45] found service account: "default"
	I0729 14:44:19.998348 1038758 default_sa.go:55] duration metric: took 195.845187ms for default service account to be created ...
	I0729 14:44:19.998361 1038758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:44:20.201742 1038758 system_pods.go:86] 9 kube-system pods found
	I0729 14:44:20.201778 1038758 system_pods.go:89] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:20.201787 1038758 system_pods.go:89] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:20.201793 1038758 system_pods.go:89] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:20.201800 1038758 system_pods.go:89] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:20.201807 1038758 system_pods.go:89] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:20.201812 1038758 system_pods.go:89] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:20.201818 1038758 system_pods.go:89] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:20.201826 1038758 system_pods.go:89] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:20.201835 1038758 system_pods.go:89] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:20.201850 1038758 system_pods.go:126] duration metric: took 203.481528ms to wait for k8s-apps to be running ...
	I0729 14:44:20.201860 1038758 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:44:20.201914 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:20.217416 1038758 system_svc.go:56] duration metric: took 15.543768ms WaitForService to wait for kubelet
	I0729 14:44:20.217445 1038758 kubeadm.go:582] duration metric: took 11.692521258s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:44:20.217464 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:44:20.398667 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:44:20.398696 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:44:20.398708 1038758 node_conditions.go:105] duration metric: took 181.238886ms to run NodePressure ...
	I0729 14:44:20.398720 1038758 start.go:241] waiting for startup goroutines ...
	I0729 14:44:20.398727 1038758 start.go:246] waiting for cluster config update ...
	I0729 14:44:20.398738 1038758 start.go:255] writing updated cluster config ...
	I0729 14:44:20.399014 1038758 ssh_runner.go:195] Run: rm -f paused
	I0729 14:44:20.452187 1038758 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 14:44:20.454434 1038758 out.go:177] * Done! kubectl is now configured to use "no-preload-603534" cluster and "default" namespace by default
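	With the start reported as done, the configured context can be verified directly from the host; a minimal sketch, assuming kubectl is on PATH and is reading the kubeconfig minikube just wrote:

	    kubectl config current-context   # expected to print the configured cluster, here no-preload-603534
	    kubectl get nodes -o wide        # confirms the API server is reachable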
	I0729 14:44:40.130597 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:40.130831 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:40.130848 1039759 kubeadm.go:310] 
	I0729 14:44:40.130903 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:44:40.130956 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:44:40.130966 1039759 kubeadm.go:310] 
	I0729 14:44:40.131032 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:44:40.131110 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:44:40.131256 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:44:40.131270 1039759 kubeadm.go:310] 
	I0729 14:44:40.131450 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:44:40.131499 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:44:40.131542 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:44:40.131552 1039759 kubeadm.go:310] 
	I0729 14:44:40.131686 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:44:40.131795 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:44:40.131806 1039759 kubeadm.go:310] 
	I0729 14:44:40.131947 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:44:40.132064 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:44:40.132162 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:44:40.132254 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:44:40.132264 1039759 kubeadm.go:310] 
	I0729 14:44:40.133208 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:40.133363 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:44:40.133468 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 14:44:40.133610 1039759 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
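	When wait-control-plane times out like this, the checks recommended in the output above can be run on the node itself; a minimal sketch using the profile's ssh access (the profile name and CONTAINERID are placeholders, not values from this run):

	    minikube -p <profile> ssh "sudo systemctl status kubelet"
	    minikube -p <profile> ssh "sudo journalctl -xeu kubelet | tail -n 100"
	    minikube -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	    minikube -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"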
	
	I0729 14:44:40.133676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:44:40.607039 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:40.623771 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:44:40.636278 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:44:40.636310 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:44:40.636371 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:44:40.647768 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:44:40.647827 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:44:40.658281 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:44:40.668393 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:44:40.668477 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:44:40.678521 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.687891 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:44:40.687960 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.698384 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:44:40.708965 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:44:40.709047 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
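	The four check-and-remove steps above follow one pattern: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, it is removed before kubeadm init is retried. A sketch of that logic as a shell loop (illustrative only, not a command minikube runs):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	        || sudo rm -f /etc/kubernetes/$f
	    done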
	I0729 14:44:40.719665 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:44:40.796786 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:44:40.796883 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:44:40.946106 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:44:40.946258 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:44:40.946388 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:44:41.140483 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:44:41.142390 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:44:41.142503 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:44:41.142610 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:44:41.142722 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:44:41.142811 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:44:41.142910 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:44:41.142995 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:44:41.143092 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:44:41.143180 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:44:41.143279 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:44:41.143390 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:44:41.143445 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:44:41.143524 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:44:41.188854 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:44:41.329957 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:44:41.968599 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:44:42.034788 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:44:42.055543 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:44:42.056622 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:44:42.056715 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:44:42.204165 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:44:42.205935 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:44:42.206076 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:44:42.216259 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:44:42.217947 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:44:42.219361 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:44:42.221672 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:45:22.223830 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:45:22.223940 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:22.224139 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:27.224303 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:27.224574 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:37.224905 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:37.225090 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:57.226285 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:57.226533 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227279 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:46:37.227485 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227494 1039759 kubeadm.go:310] 
	I0729 14:46:37.227528 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:46:37.227605 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:46:37.227627 1039759 kubeadm.go:310] 
	I0729 14:46:37.227683 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:46:37.227732 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:46:37.227861 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:46:37.227870 1039759 kubeadm.go:310] 
	I0729 14:46:37.228011 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:46:37.228093 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:46:37.228140 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:46:37.228173 1039759 kubeadm.go:310] 
	I0729 14:46:37.228310 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:46:37.228443 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:46:37.228454 1039759 kubeadm.go:310] 
	I0729 14:46:37.228606 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:46:37.228714 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:46:37.228821 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:46:37.228913 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:46:37.228934 1039759 kubeadm.go:310] 
	I0729 14:46:37.229926 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:46:37.230070 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:46:37.230175 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:46:37.230284 1039759 kubeadm.go:394] duration metric: took 7m57.608522587s to StartCluster
	I0729 14:46:37.230347 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:46:37.230435 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:46:37.276238 1039759 cri.go:89] found id: ""
	I0729 14:46:37.276294 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.276304 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:46:37.276317 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:46:37.276439 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:46:37.309934 1039759 cri.go:89] found id: ""
	I0729 14:46:37.309960 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.309969 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:46:37.309975 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:46:37.310031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:46:37.343286 1039759 cri.go:89] found id: ""
	I0729 14:46:37.343312 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.343320 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:46:37.343325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:46:37.343384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:46:37.378735 1039759 cri.go:89] found id: ""
	I0729 14:46:37.378763 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.378773 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:46:37.378779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:46:37.378834 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:46:37.414244 1039759 cri.go:89] found id: ""
	I0729 14:46:37.414275 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.414284 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:46:37.414290 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:46:37.414372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:46:37.453809 1039759 cri.go:89] found id: ""
	I0729 14:46:37.453842 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.453858 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:46:37.453866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:46:37.453955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:46:37.492250 1039759 cri.go:89] found id: ""
	I0729 14:46:37.492279 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.492288 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:46:37.492294 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:46:37.492360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:46:37.554342 1039759 cri.go:89] found id: ""
	I0729 14:46:37.554377 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.554388 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:46:37.554404 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:46:37.554422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:46:37.631118 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:46:37.631165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:46:37.650991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:46:37.651047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:46:37.731852 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:46:37.731880 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:46:37.731897 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:46:37.849049 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:46:37.849092 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
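	The same diagnostics gathered here can be pulled manually from the node when reproducing the failure; a minimal sketch (the profile name is a placeholder):

	    minikube -p <profile> ssh "sudo journalctl -u kubelet -n 400"
	    minikube -p <profile> ssh "sudo journalctl -u crio -n 400"
	    minikube -p <profile> ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	    minikube -p <profile> ssh "sudo crictl ps -a"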
	W0729 14:46:37.893957 1039759 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:46:37.894031 1039759 out.go:239] * 
	W0729 14:46:37.894120 1039759 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.894150 1039759 out.go:239] * 
	W0729 14:46:37.895278 1039759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:46:37.898735 1039759 out.go:177] 
	W0729 14:46:37.900049 1039759 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.900115 1039759 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:46:37.900146 1039759 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 14:46:37.901531 1039759 out.go:177] 
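	The suggested workaround maps directly to a start flag; a minimal sketch of a retry (the profile name is a placeholder, and any flags from the original start would normally be repeated alongside it):

	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd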
	
	
	==> CRI-O <==
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.784463297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264695784442848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f092554-629d-4168-9880-39851a33a2e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.785046186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=328fe1d0-0090-4633-8e69-130e8d6f4fef name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.785098350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=328fe1d0-0090-4633-8e69-130e8d6f4fef name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.785822648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263917439411605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9730fdd74b65ac1bbf5e6b9ae441a2bbb0987220523c3c0196be708c4c0c3da9,PodSandboxId:d1bc6f5643615099eccc7360dfe694a10993162317d7374663b36b773c470a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722263895469474230,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edbc7100-5ac6-4390-98cf-b25430811079,},Annotations:map[string]string{io.kubernetes.container.hash: 33062a48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d,PodSandboxId:48c4f3ee73cb82b7b98baafb48f72768686b27f2ca95c11caadbce4fc9168003,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263894397531319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6dhzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c680e565-fe93-4072-8fe8-6fd440ae5675,},Annotations:map[string]string{io.kubernetes.container.hash: cf53aa0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b,PodSandboxId:fd6724a9aa4c34cf84b6252c053b4d91e97d7f6252a5c136f5353c6e4a84a751,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263886696069519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v79q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43e850d-b94e-467c-b
f0f-0eac3828f54f,},Annotations:map[string]string{io.kubernetes.container.hash: 6a843a38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263886661941181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1
e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1,PodSandboxId:313ef259dc43d4445991f332076e128a7f0f959c520f250c1ffae2e6d8ebef3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263882191517261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f505464a2b90053edb5a2e8c39af5afc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2ba1fd2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40,PodSandboxId:25492576dabe31fcb69f3da27658eb441756406366bbe433d4cd5f58dae3e1cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263882204157437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9adec07bd4fd19fe84094f223023c77,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8,PodSandboxId:93123498f71110244da3766451ff829fd57f09c4065fa24297b86581ab282783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263882187083721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae9160831d31b493d52099610c42660,},Annotations:map[string]string{io.kubernetes.container.hash:
49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322,PodSandboxId:fc46c20762ab46eaade5b501f697d4354d898f4c2a70d5720f8581c13496c0a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263882181524173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832cf326908a8d72f3f9a2e90540c6ae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=328fe1d0-0090-4633-8e69-130e8d6f4fef name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.830910227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2757732-943f-4af2-b882-f4646b95f8d6 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.830989134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2757732-943f-4af2-b882-f4646b95f8d6 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.832383208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3edc073-3f33-49f0-b806-ca13cfedf58e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.833234335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264695833204604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3edc073-3f33-49f0-b806-ca13cfedf58e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.833901847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a95d802-7cbd-4173-9fa2-396873b178b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.833957556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a95d802-7cbd-4173-9fa2-396873b178b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.834138945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263917439411605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9730fdd74b65ac1bbf5e6b9ae441a2bbb0987220523c3c0196be708c4c0c3da9,PodSandboxId:d1bc6f5643615099eccc7360dfe694a10993162317d7374663b36b773c470a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722263895469474230,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edbc7100-5ac6-4390-98cf-b25430811079,},Annotations:map[string]string{io.kubernetes.container.hash: 33062a48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d,PodSandboxId:48c4f3ee73cb82b7b98baafb48f72768686b27f2ca95c11caadbce4fc9168003,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263894397531319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6dhzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c680e565-fe93-4072-8fe8-6fd440ae5675,},Annotations:map[string]string{io.kubernetes.container.hash: cf53aa0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b,PodSandboxId:fd6724a9aa4c34cf84b6252c053b4d91e97d7f6252a5c136f5353c6e4a84a751,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263886696069519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v79q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43e850d-b94e-467c-b
f0f-0eac3828f54f,},Annotations:map[string]string{io.kubernetes.container.hash: 6a843a38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263886661941181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1
e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1,PodSandboxId:313ef259dc43d4445991f332076e128a7f0f959c520f250c1ffae2e6d8ebef3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263882191517261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f505464a2b90053edb5a2e8c39af5afc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2ba1fd2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40,PodSandboxId:25492576dabe31fcb69f3da27658eb441756406366bbe433d4cd5f58dae3e1cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263882204157437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9adec07bd4fd19fe84094f223023c77,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8,PodSandboxId:93123498f71110244da3766451ff829fd57f09c4065fa24297b86581ab282783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263882187083721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae9160831d31b493d52099610c42660,},Annotations:map[string]string{io.kubernetes.container.hash:
49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322,PodSandboxId:fc46c20762ab46eaade5b501f697d4354d898f4c2a70d5720f8581c13496c0a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263882181524173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832cf326908a8d72f3f9a2e90540c6ae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a95d802-7cbd-4173-9fa2-396873b178b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.873545145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2cbdb84-e1d5-4c3f-8ca3-f8c487ad1fca name=/runtime.v1.RuntimeService/Version
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.873620676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2cbdb84-e1d5-4c3f-8ca3-f8c487ad1fca name=/runtime.v1.RuntimeService/Version
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.875221389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7717a29-f145-4a55-b0d6-06768a7f937e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.875583284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264695875563310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7717a29-f145-4a55-b0d6-06768a7f937e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.876243212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ec787bf-eadd-45a3-a2b2-9caac8a02a88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.876302104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ec787bf-eadd-45a3-a2b2-9caac8a02a88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.876482726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263917439411605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9730fdd74b65ac1bbf5e6b9ae441a2bbb0987220523c3c0196be708c4c0c3da9,PodSandboxId:d1bc6f5643615099eccc7360dfe694a10993162317d7374663b36b773c470a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722263895469474230,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edbc7100-5ac6-4390-98cf-b25430811079,},Annotations:map[string]string{io.kubernetes.container.hash: 33062a48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d,PodSandboxId:48c4f3ee73cb82b7b98baafb48f72768686b27f2ca95c11caadbce4fc9168003,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263894397531319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6dhzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c680e565-fe93-4072-8fe8-6fd440ae5675,},Annotations:map[string]string{io.kubernetes.container.hash: cf53aa0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b,PodSandboxId:fd6724a9aa4c34cf84b6252c053b4d91e97d7f6252a5c136f5353c6e4a84a751,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263886696069519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v79q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43e850d-b94e-467c-b
f0f-0eac3828f54f,},Annotations:map[string]string{io.kubernetes.container.hash: 6a843a38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263886661941181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1
e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1,PodSandboxId:313ef259dc43d4445991f332076e128a7f0f959c520f250c1ffae2e6d8ebef3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263882191517261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f505464a2b90053edb5a2e8c39af5afc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2ba1fd2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40,PodSandboxId:25492576dabe31fcb69f3da27658eb441756406366bbe433d4cd5f58dae3e1cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263882204157437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9adec07bd4fd19fe84094f223023c77,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8,PodSandboxId:93123498f71110244da3766451ff829fd57f09c4065fa24297b86581ab282783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263882187083721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae9160831d31b493d52099610c42660,},Annotations:map[string]string{io.kubernetes.container.hash:
49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322,PodSandboxId:fc46c20762ab46eaade5b501f697d4354d898f4c2a70d5720f8581c13496c0a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263882181524173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832cf326908a8d72f3f9a2e90540c6ae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ec787bf-eadd-45a3-a2b2-9caac8a02a88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.912820437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e87756d6-ae80-4399-b1a0-32bfffd15639 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.912973794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e87756d6-ae80-4399-b1a0-32bfffd15639 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.914698693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36065136-8932-4cb1-971f-8a7dc6fa91e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.915125194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264695915105407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36065136-8932-4cb1-971f-8a7dc6fa91e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.915830802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e68e85ce-c773-4e6d-932b-0811d37c8e43 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.915881813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e68e85ce-c773-4e6d-932b-0811d37c8e43 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:51:35 embed-certs-668123 crio[724]: time="2024-07-29 14:51:35.916091323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263917439411605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9730fdd74b65ac1bbf5e6b9ae441a2bbb0987220523c3c0196be708c4c0c3da9,PodSandboxId:d1bc6f5643615099eccc7360dfe694a10993162317d7374663b36b773c470a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722263895469474230,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edbc7100-5ac6-4390-98cf-b25430811079,},Annotations:map[string]string{io.kubernetes.container.hash: 33062a48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d,PodSandboxId:48c4f3ee73cb82b7b98baafb48f72768686b27f2ca95c11caadbce4fc9168003,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263894397531319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6dhzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c680e565-fe93-4072-8fe8-6fd440ae5675,},Annotations:map[string]string{io.kubernetes.container.hash: cf53aa0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b,PodSandboxId:fd6724a9aa4c34cf84b6252c053b4d91e97d7f6252a5c136f5353c6e4a84a751,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263886696069519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v79q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43e850d-b94e-467c-b
f0f-0eac3828f54f,},Annotations:map[string]string{io.kubernetes.container.hash: 6a843a38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263886661941181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1
e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1,PodSandboxId:313ef259dc43d4445991f332076e128a7f0f959c520f250c1ffae2e6d8ebef3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263882191517261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f505464a2b90053edb5a2e8c39af5afc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2ba1fd2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40,PodSandboxId:25492576dabe31fcb69f3da27658eb441756406366bbe433d4cd5f58dae3e1cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263882204157437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9adec07bd4fd19fe84094f223023c77,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8,PodSandboxId:93123498f71110244da3766451ff829fd57f09c4065fa24297b86581ab282783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263882187083721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae9160831d31b493d52099610c42660,},Annotations:map[string]string{io.kubernetes.container.hash:
49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322,PodSandboxId:fc46c20762ab46eaade5b501f697d4354d898f4c2a70d5720f8581c13496c0a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263882181524173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832cf326908a8d72f3f9a2e90540c6ae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e68e85ce-c773-4e6d-932b-0811d37c8e43 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb9e633119b91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   ad2e8069a6761       storage-provisioner
	9730fdd74b65a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d1bc6f5643615       busybox
	cce96789d197c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   48c4f3ee73cb8       coredns-7db6d8ff4d-6dhzz
	1a12022d9b8d8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   fd6724a9aa4c3       kube-proxy-2v79q
	40292615dffc7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   ad2e8069a6761       storage-provisioner
	ed34fb84b9098       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   25492576dabe3       kube-scheduler-embed-certs-668123
	759428588e36e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   313ef259dc43d       etcd-embed-certs-668123
	0e342f5e4bb06       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   93123498f7111       kube-apiserver-embed-certs-668123
	d2573d61839fb       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   fc46c20762ab4       kube-controller-manager-embed-certs-668123
	
	
	==> coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40987 - 58230 "HINFO IN 8956371880180969171.6547811078675431536. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016375587s
	
	
	==> describe nodes <==
	Name:               embed-certs-668123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-668123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=embed-certs-668123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T14_30_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:30:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-668123
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:51:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:48:48 +0000   Mon, 29 Jul 2024 14:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:48:48 +0000   Mon, 29 Jul 2024 14:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:48:48 +0000   Mon, 29 Jul 2024 14:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:48:48 +0000   Mon, 29 Jul 2024 14:38:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.53
	  Hostname:    embed-certs-668123
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 624dda01c3c740c99fa2a7a21b4ad9e8
	  System UUID:                624dda01-c3c7-40c9-9fa2-a7a21b4ad9e8
	  Boot ID:                    9a80de19-1697-4aaa-b7b0-a87331c1439a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-6dhzz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-668123                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-668123             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-668123    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-2v79q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-668123             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-5msnp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-668123 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-668123 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-668123 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-668123 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-668123 event: Registered Node embed-certs-668123 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-668123 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-668123 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-668123 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-668123 event: Registered Node embed-certs-668123 in Controller
	
	
	==> dmesg <==
	[Jul29 14:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050125] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039160] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.762991] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.506538] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.544117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.778255] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.057161] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055886] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.221635] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.134279] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.313673] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.284325] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.069578] kauditd_printk_skb: 130 callbacks suppressed
	[Jul29 14:38] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +5.660972] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.323557] systemd-fstab-generator[1530]: Ignoring "noauto" option for root device
	[  +3.310534] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.155515] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] <==
	{"level":"info","ts":"2024-07-29T14:38:19.958246Z","caller":"traceutil/trace.go:171","msg":"trace[1442572305] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-668123; range_end:; response_count:1; response_revision:592; }","duration":"394.457564ms","start":"2024-07-29T14:38:19.563783Z","end":"2024-07-29T14:38:19.95824Z","steps":["trace[1442572305] 'agreement among raft nodes before linearized reading'  (duration: 394.369885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T14:38:19.958264Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:38:19.563709Z","time spent":"394.550186ms","remote":"127.0.0.1:59218","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":5598,"request content":"key:\"/registry/pods/kube-system/etcd-embed-certs-668123\" "}
	{"level":"warn","ts":"2024-07-29T14:38:20.763466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.789465ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277398289108330 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:592 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T14:38:20.763662Z","caller":"traceutil/trace.go:171","msg":"trace[1460552915] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"788.135762ms","start":"2024-07-29T14:38:19.975507Z","end":"2024-07-29T14:38:20.763643Z","steps":["trace[1460552915] 'process raft request'  (duration: 433.059304ms)","trace[1460552915] 'compare'  (duration: 354.324382ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T14:38:20.763806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:38:19.975496Z","time spent":"788.199241ms","remote":"127.0.0.1:59482","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:592 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-07-29T14:38:20.766376Z","caller":"traceutil/trace.go:171","msg":"trace[971957081] linearizableReadLoop","detail":"{readStateIndex:638; appliedIndex:636; }","duration":"702.530021ms","start":"2024-07-29T14:38:20.063832Z","end":"2024-07-29T14:38:20.766362Z","steps":["trace[971957081] 'read index received'  (duration: 344.80563ms)","trace[971957081] 'applied index is now lower than readState.Index'  (duration: 357.723589ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T14:38:20.766553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"702.708719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-668123\" ","response":"range_response_count:1 size:5576"}
	{"level":"info","ts":"2024-07-29T14:38:20.766604Z","caller":"traceutil/trace.go:171","msg":"trace[366005341] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-668123; range_end:; response_count:1; response_revision:594; }","duration":"702.766729ms","start":"2024-07-29T14:38:20.063828Z","end":"2024-07-29T14:38:20.766595Z","steps":["trace[366005341] 'agreement among raft nodes before linearized reading'  (duration: 702.608634ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T14:38:20.766634Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:38:20.063793Z","time spent":"702.832145ms","remote":"127.0.0.1:59218","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":5598,"request content":"key:\"/registry/pods/kube-system/etcd-embed-certs-668123\" "}
	{"level":"info","ts":"2024-07-29T14:38:20.767167Z","caller":"traceutil/trace.go:171","msg":"trace[1984292092] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"790.048712ms","start":"2024-07-29T14:38:19.977107Z","end":"2024-07-29T14:38:20.767155Z","steps":["trace[1984292092] 'process raft request'  (duration: 789.178838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T14:38:20.76727Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:38:19.977096Z","time spent":"790.125595ms","remote":"127.0.0.1:59218","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6889,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-668123\" mod_revision:539 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-668123\" value_size:6821 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-668123\" > >"}
	{"level":"info","ts":"2024-07-29T14:38:38.792578Z","caller":"traceutil/trace.go:171","msg":"trace[1794317838] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"364.588635ms","start":"2024-07-29T14:38:38.427973Z","end":"2024-07-29T14:38:38.792561Z","steps":["trace[1794317838] 'process raft request'  (duration: 364.4651ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T14:38:38.79285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:38:38.427956Z","time spent":"364.762171ms","remote":"127.0.0.1:59218","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3817,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:613 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3763 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2024-07-29T14:38:57.126195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.29983ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277398289108623 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-668123\" mod_revision:618 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-668123\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-668123\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T14:38:57.126369Z","caller":"traceutil/trace.go:171","msg":"trace[2088967150] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:679; }","duration":"127.128844ms","start":"2024-07-29T14:38:56.999227Z","end":"2024-07-29T14:38:57.126356Z","steps":["trace[2088967150] 'read index received'  (duration: 124.263244ms)","trace[2088967150] 'applied index is now lower than readState.Index'  (duration: 2.863973ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T14:38:57.126501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.283633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-07-29T14:38:57.126539Z","caller":"traceutil/trace.go:171","msg":"trace[43429701] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:628; }","duration":"127.348743ms","start":"2024-07-29T14:38:56.999183Z","end":"2024-07-29T14:38:57.126532Z","steps":["trace[43429701] 'agreement among raft nodes before linearized reading'  (duration: 127.206099ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T14:38:57.126893Z","caller":"traceutil/trace.go:171","msg":"trace[1282939821] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"320.758999ms","start":"2024-07-29T14:38:56.806119Z","end":"2024-07-29T14:38:57.126878Z","steps":["trace[1282939821] 'process raft request'  (duration: 191.696128ms)","trace[1282939821] 'compare'  (duration: 128.182348ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T14:38:57.126997Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:38:56.806103Z","time spent":"320.851063ms","remote":"127.0.0.1:59272","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-668123\" mod_revision:618 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-668123\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-668123\" > >"}
	{"level":"info","ts":"2024-07-29T14:38:57.127156Z","caller":"traceutil/trace.go:171","msg":"trace[536823810] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"254.332486ms","start":"2024-07-29T14:38:56.872817Z","end":"2024-07-29T14:38:57.12715Z","steps":["trace[536823810] 'process raft request'  (duration: 253.489233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T14:39:03.47216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.663068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5msnp\" ","response":"range_response_count:1 size:4281"}
	{"level":"info","ts":"2024-07-29T14:39:03.472222Z","caller":"traceutil/trace.go:171","msg":"trace[1321925536] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5msnp; range_end:; response_count:1; response_revision:635; }","duration":"190.760888ms","start":"2024-07-29T14:39:03.281446Z","end":"2024-07-29T14:39:03.472207Z","steps":["trace[1321925536] 'range keys from in-memory index tree'  (duration: 190.558822ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T14:48:04.333937Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":840}
	{"level":"info","ts":"2024-07-29T14:48:04.343874Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":840,"took":"9.561259ms","hash":671835366,"current-db-size-bytes":2613248,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2613248,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-29T14:48:04.343955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":671835366,"revision":840,"compact-revision":-1}
	
	
	==> kernel <==
	 14:51:36 up 13 min,  0 users,  load average: 0.15, 0.19, 0.11
	Linux embed-certs-668123 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] <==
	I0729 14:46:06.632441       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:48:05.633786       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:48:05.634095       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 14:48:06.634692       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:48:06.634844       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:48:06.634853       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:48:06.634692       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:48:06.634879       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:48:06.636977       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:49:06.635796       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:49:06.636121       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:49:06.636169       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:49:06.638119       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:49:06.638165       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:49:06.638177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:51:06.636716       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:51:06.637151       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:51:06.637188       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:51:06.639151       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:51:06.639200       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:51:06.639209       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] <==
	I0729 14:45:49.093166       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:46:18.607380       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:46:19.100310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:46:48.612270       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:46:49.108559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:47:18.617558       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:47:19.116165       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:47:48.622094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:47:49.125989       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:48:18.627095       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:48:19.134391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:48:48.633856       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:48:49.141904       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:49:05.207136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="444.412µs"
	I0729 14:49:17.197537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="203.193µs"
	E0729 14:49:18.639592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:49:19.149002       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:49:48.644466       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:49:49.157962       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:50:18.649140       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:50:19.165628       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:50:48.656553       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:50:49.175224       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:51:18.666058       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:51:19.186023       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] <==
	I0729 14:38:06.910479       1 server_linux.go:69] "Using iptables proxy"
	I0729 14:38:06.920558       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.53"]
	I0729 14:38:06.957526       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 14:38:06.957609       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:38:06.957638       1 server_linux.go:165] "Using iptables Proxier"
	I0729 14:38:06.960330       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 14:38:06.960565       1 server.go:872] "Version info" version="v1.30.3"
	I0729 14:38:06.960609       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:38:06.962144       1 config.go:192] "Starting service config controller"
	I0729 14:38:06.962189       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:38:06.962228       1 config.go:101] "Starting endpoint slice config controller"
	I0729 14:38:06.962244       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:38:06.962572       1 config.go:319] "Starting node config controller"
	I0729 14:38:06.962608       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:38:07.063273       1 shared_informer.go:320] Caches are synced for node config
	I0729 14:38:07.063374       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:38:07.063391       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] <==
	I0729 14:38:03.303024       1 serving.go:380] Generated self-signed cert in-memory
	W0729 14:38:05.590466       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 14:38:05.590573       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:38:05.590600       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 14:38:05.590607       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 14:38:05.646707       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 14:38:05.651257       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:38:05.653101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 14:38:05.653238       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 14:38:05.653266       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:38:05.653280       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 14:38:05.753479       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 14:49:01 embed-certs-668123 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:49:01 embed-certs-668123 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:49:01 embed-certs-668123 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:49:05 embed-certs-668123 kubelet[937]: E0729 14:49:05.186434     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:49:17 embed-certs-668123 kubelet[937]: E0729 14:49:17.183256     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:49:30 embed-certs-668123 kubelet[937]: E0729 14:49:30.183200     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:49:45 embed-certs-668123 kubelet[937]: E0729 14:49:45.183397     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:50:00 embed-certs-668123 kubelet[937]: E0729 14:50:00.183802     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:50:01 embed-certs-668123 kubelet[937]: E0729 14:50:01.218313     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:50:01 embed-certs-668123 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:50:01 embed-certs-668123 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:50:01 embed-certs-668123 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:50:01 embed-certs-668123 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:50:14 embed-certs-668123 kubelet[937]: E0729 14:50:14.183448     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:50:27 embed-certs-668123 kubelet[937]: E0729 14:50:27.183539     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:50:39 embed-certs-668123 kubelet[937]: E0729 14:50:39.183624     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:50:52 embed-certs-668123 kubelet[937]: E0729 14:50:52.183942     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:51:01 embed-certs-668123 kubelet[937]: E0729 14:51:01.218145     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:51:01 embed-certs-668123 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:51:01 embed-certs-668123 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:51:01 embed-certs-668123 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:51:01 embed-certs-668123 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:51:04 embed-certs-668123 kubelet[937]: E0729 14:51:04.183590     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:51:18 embed-certs-668123 kubelet[937]: E0729 14:51:18.183575     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:51:30 embed-certs-668123 kubelet[937]: E0729 14:51:30.183441     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	
	
	==> storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] <==
	I0729 14:38:06.840937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 14:38:36.845691       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] <==
	I0729 14:38:37.575618       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 14:38:37.585149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 14:38:37.585239       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 14:38:54.987972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 14:38:54.988350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-668123_2eb9da7e-9d3b-4756-9d53-8e848f523f15!
	I0729 14:38:54.989156       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d40ab2b-13cf-41cb-bc8c-a2b36c4772e4", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-668123_2eb9da7e-9d3b-4756-9d53-8e848f523f15 became leader
	I0729 14:38:55.089840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-668123_2eb9da7e-9d3b-4756-9d53-8e848f523f15!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-668123 -n embed-certs-668123
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-668123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5msnp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-668123 describe pod metrics-server-569cc877fc-5msnp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-668123 describe pod metrics-server-569cc877fc-5msnp: exit status 1 (62.360406ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5msnp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-668123 describe pod metrics-server-569cc877fc-5msnp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 14:43:55.652273  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:44:00.040825  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 14:52:39.260903066 +0000 UTC m=+6045.668626834
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
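For reference, the label selector and namespace the test polls can be checked by hand against the same profile. This is a hypothetical manual spot-check, assuming the profile's kubeconfig context is still available; it is not a command the test itself runs:

	kubectl --context default-k8s-diff-port-751306 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-751306 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

The 540s timeout mirrors the test's 9m0s wait; if the dashboard addon never created the deployment, the first command reports no resources found and the wait times out the same way the test did.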
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-751306 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-751306 logs -n 25: (2.112551932s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-513289 sudo cat                             | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo find                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo crio                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-513289                                      | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-054967 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | disable-driver-mounts-054967                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:34:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:34:53.874295 1039759 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:34:53.874567 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874577 1039759 out.go:304] Setting ErrFile to fd 2...
	I0729 14:34:53.874580 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874774 1039759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:34:53.875294 1039759 out.go:298] Setting JSON to false
	I0729 14:34:53.876313 1039759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15446,"bootTime":1722248248,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:34:53.876373 1039759 start.go:139] virtualization: kvm guest
	I0729 14:34:53.878446 1039759 out.go:177] * [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:34:53.879820 1039759 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:34:53.879855 1039759 notify.go:220] Checking for updates...
	I0729 14:34:53.882201 1039759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:34:53.883330 1039759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:34:53.884514 1039759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:34:53.885734 1039759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:34:53.886894 1039759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:34:53.888361 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:34:53.888789 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.888850 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.903960 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 14:34:53.904467 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.905083 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.905112 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.905449 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.905609 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.907360 1039759 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 14:34:53.908710 1039759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:34:53.909026 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.909064 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.923834 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0729 14:34:53.924300 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.924787 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.924809 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.925150 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.925352 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.960368 1039759 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:34:53.961649 1039759 start.go:297] selected driver: kvm2
	I0729 14:34:53.961662 1039759 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.961778 1039759 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:34:53.962398 1039759 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.962459 1039759 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:34:53.977941 1039759 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:34:53.978311 1039759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:34:53.978341 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:34:53.978350 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:34:53.978395 1039759 start.go:340] cluster config:
	{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.978499 1039759 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.980167 1039759 out.go:177] * Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	I0729 14:34:55.588663 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:34:53.981356 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:34:53.981390 1039759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:34:53.981400 1039759 cache.go:56] Caching tarball of preloaded images
	I0729 14:34:53.981477 1039759 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:34:53.981487 1039759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:34:53.981600 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:34:53.981775 1039759 start.go:360] acquireMachinesLock for old-k8s-version-360866: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:34:58.660730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:04.740665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:07.812781 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:13.892659 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:16.964692 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:23.044749 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:26.116761 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:32.196730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:35.268709 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:41.348712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:44.420693 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:50.500715 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:53.572717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:59.652707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:02.724722 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:08.804719 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:11.876665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:17.956684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:21.028707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:27.108667 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:30.180710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:36.260645 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:39.332717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:45.412694 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:48.484713 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:54.564703 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:57.636707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:03.716690 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:06.788660 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:12.868658 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:15.940708 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:22.020684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:25.092712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:31.172710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:34.177216 1039263 start.go:364] duration metric: took 3m42.890532077s to acquireMachinesLock for "embed-certs-668123"
	I0729 14:37:34.177291 1039263 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:34.177300 1039263 fix.go:54] fixHost starting: 
	I0729 14:37:34.177641 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:34.177673 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:34.193427 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0729 14:37:34.193879 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:34.194396 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:37:34.194421 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:34.194774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:34.194987 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:34.195156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:37:34.196597 1039263 fix.go:112] recreateIfNeeded on embed-certs-668123: state=Stopped err=<nil>
	I0729 14:37:34.196642 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	W0729 14:37:34.196802 1039263 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:34.198564 1039263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-668123" ...
	I0729 14:37:34.199926 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Start
	I0729 14:37:34.200086 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring networks are active...
	I0729 14:37:34.200833 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network default is active
	I0729 14:37:34.201159 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network mk-embed-certs-668123 is active
	I0729 14:37:34.201578 1039263 main.go:141] libmachine: (embed-certs-668123) Getting domain xml...
	I0729 14:37:34.202214 1039263 main.go:141] libmachine: (embed-certs-668123) Creating domain...
	I0729 14:37:34.510575 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting to get IP...
	I0729 14:37:34.511459 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.511909 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.512006 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.511904 1040307 retry.go:31] will retry after 294.890973ms: waiting for machine to come up
	I0729 14:37:34.808513 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.809044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.809070 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.809007 1040307 retry.go:31] will retry after 296.152247ms: waiting for machine to come up
	I0729 14:37:35.106423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.106839 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.106872 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.106773 1040307 retry.go:31] will retry after 384.830082ms: waiting for machine to come up
	I0729 14:37:35.493463 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.493902 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.493933 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.493861 1040307 retry.go:31] will retry after 490.673812ms: waiting for machine to come up
	I0729 14:37:35.986675 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.987184 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.987235 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.987099 1040307 retry.go:31] will retry after 725.022775ms: waiting for machine to come up
	I0729 14:37:34.174673 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:34.174713 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175060 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:37:34.175084 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175279 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:37:34.177042 1038758 machine.go:97] duration metric: took 4m37.39644293s to provisionDockerMachine
	I0729 14:37:34.177087 1038758 fix.go:56] duration metric: took 4m37.417815827s for fixHost
	I0729 14:37:34.177094 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 4m37.417912853s
	W0729 14:37:34.177127 1038758 start.go:714] error starting host: provision: host is not running
	W0729 14:37:34.177230 1038758 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 14:37:34.177240 1038758 start.go:729] Will try again in 5 seconds ...
	I0729 14:37:36.714078 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:36.714502 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:36.714565 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:36.714389 1040307 retry.go:31] will retry after 722.684756ms: waiting for machine to come up
	I0729 14:37:37.438316 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:37.438859 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:37.438891 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:37.438802 1040307 retry.go:31] will retry after 1.163999997s: waiting for machine to come up
	I0729 14:37:38.604109 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:38.604503 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:38.604531 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:38.604469 1040307 retry.go:31] will retry after 1.401566003s: waiting for machine to come up
	I0729 14:37:40.007310 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:40.007900 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:40.007929 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:40.007839 1040307 retry.go:31] will retry after 1.40470791s: waiting for machine to come up
	I0729 14:37:39.178982 1038758 start.go:360] acquireMachinesLock for no-preload-603534: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:37:41.414509 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:41.415018 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:41.415049 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:41.414959 1040307 retry.go:31] will retry after 2.205183048s: waiting for machine to come up
	I0729 14:37:43.623427 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:43.623894 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:43.623922 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:43.623856 1040307 retry.go:31] will retry after 2.444881913s: waiting for machine to come up
	I0729 14:37:46.070961 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:46.071314 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:46.071338 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:46.071271 1040307 retry.go:31] will retry after 3.115189863s: waiting for machine to come up
	I0729 14:37:49.187610 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:49.188107 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:49.188134 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:49.188054 1040307 retry.go:31] will retry after 3.139484284s: waiting for machine to come up
	I0729 14:37:53.653416 1039440 start.go:364] duration metric: took 3m41.12464482s to acquireMachinesLock for "default-k8s-diff-port-751306"
	I0729 14:37:53.653486 1039440 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:53.653494 1039440 fix.go:54] fixHost starting: 
	I0729 14:37:53.653880 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:53.653913 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:53.671499 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0729 14:37:53.671927 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:53.672550 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:37:53.672584 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:53.672986 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:53.673198 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:37:53.673353 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:37:53.674706 1039440 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751306: state=Stopped err=<nil>
	I0729 14:37:53.674736 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	W0729 14:37:53.674896 1039440 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:53.677098 1039440 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751306" ...
	I0729 14:37:52.329477 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.329880 1039263 main.go:141] libmachine: (embed-certs-668123) Found IP for machine: 192.168.50.53
	I0729 14:37:52.329895 1039263 main.go:141] libmachine: (embed-certs-668123) Reserving static IP address...
	I0729 14:37:52.329906 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has current primary IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.330376 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.330414 1039263 main.go:141] libmachine: (embed-certs-668123) Reserved static IP address: 192.168.50.53
	I0729 14:37:52.330433 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | skip adding static IP to network mk-embed-certs-668123 - found existing host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"}
	I0729 14:37:52.330453 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Getting to WaitForSSH function...
	I0729 14:37:52.330465 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting for SSH to be available...
	I0729 14:37:52.332510 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332794 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.332821 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332897 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH client type: external
	I0729 14:37:52.332931 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa (-rw-------)
	I0729 14:37:52.332963 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:37:52.332976 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | About to run SSH command:
	I0729 14:37:52.332989 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | exit 0
	I0729 14:37:52.456152 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | SSH cmd err, output: <nil>: 
	I0729 14:37:52.456532 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetConfigRaw
	I0729 14:37:52.457156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.459620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.459946 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.459980 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.460200 1039263 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/config.json ...
	I0729 14:37:52.460384 1039263 machine.go:94] provisionDockerMachine start ...
	I0729 14:37:52.460404 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:52.460672 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.462798 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463089 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.463119 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463260 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.463428 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463594 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463703 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.463856 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.464071 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.464080 1039263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:37:52.564925 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:37:52.564959 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565214 1039263 buildroot.go:166] provisioning hostname "embed-certs-668123"
	I0729 14:37:52.565241 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565472 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.568131 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568450 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.568482 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568615 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.568825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.568975 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.569143 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.569335 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.569511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.569522 1039263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-668123 && echo "embed-certs-668123" | sudo tee /etc/hostname
	I0729 14:37:52.686424 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-668123
	
	I0729 14:37:52.686459 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.689074 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689387 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.689422 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689619 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.689825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.689999 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.690164 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.690338 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.690511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.690526 1039263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-668123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-668123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-668123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:37:52.801778 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:52.801812 1039263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:37:52.801841 1039263 buildroot.go:174] setting up certificates
	I0729 14:37:52.801851 1039263 provision.go:84] configureAuth start
	I0729 14:37:52.801863 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.802133 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.804526 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.804877 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.804910 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.805053 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.807140 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807369 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.807395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807505 1039263 provision.go:143] copyHostCerts
	I0729 14:37:52.807594 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:37:52.807608 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:37:52.807698 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:37:52.807840 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:37:52.807852 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:37:52.807891 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:37:52.807969 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:37:52.807979 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:37:52.808011 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:37:52.808084 1039263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-668123 san=[127.0.0.1 192.168.50.53 embed-certs-668123 localhost minikube]
	I0729 14:37:53.007382 1039263 provision.go:177] copyRemoteCerts
	I0729 14:37:53.007459 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:37:53.007548 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.010097 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010465 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.010488 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010660 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.010864 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.011037 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.011193 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.092043 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 14:37:53.116737 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:37:53.139828 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:37:53.162813 1039263 provision.go:87] duration metric: took 360.943219ms to configureAuth
	I0729 14:37:53.162856 1039263 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:37:53.163051 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:37:53.163144 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.165757 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166102 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.166130 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166272 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.166465 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166665 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166817 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.166983 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.167154 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.167169 1039263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:37:53.428217 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:37:53.428246 1039263 machine.go:97] duration metric: took 967.84942ms to provisionDockerMachine
	I0729 14:37:53.428258 1039263 start.go:293] postStartSetup for "embed-certs-668123" (driver="kvm2")
	I0729 14:37:53.428269 1039263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:37:53.428298 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.428641 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:37:53.428669 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.431228 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431593 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.431620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431797 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.431992 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.432159 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.432313 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.511226 1039263 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:37:53.515527 1039263 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:37:53.515555 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:37:53.515635 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:37:53.515724 1039263 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:37:53.515846 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:37:53.525606 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:53.548757 1039263 start.go:296] duration metric: took 120.484005ms for postStartSetup
	I0729 14:37:53.548798 1039263 fix.go:56] duration metric: took 19.371497305s for fixHost
	I0729 14:37:53.548827 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.551373 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551697 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.551725 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.552085 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552226 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552383 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.552574 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.552746 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.552756 1039263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:37:53.653267 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263873.628230451
	
	I0729 14:37:53.653291 1039263 fix.go:216] guest clock: 1722263873.628230451
	I0729 14:37:53.653301 1039263 fix.go:229] Guest: 2024-07-29 14:37:53.628230451 +0000 UTC Remote: 2024-07-29 14:37:53.548802078 +0000 UTC m=+242.399919494 (delta=79.428373ms)
	I0729 14:37:53.653329 1039263 fix.go:200] guest clock delta is within tolerance: 79.428373ms
	I0729 14:37:53.653337 1039263 start.go:83] releasing machines lock for "embed-certs-668123", held for 19.476079428s
	I0729 14:37:53.653364 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.653673 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:53.656383 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656805 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.656836 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656958 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657597 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657831 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657923 1039263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:37:53.657981 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.658101 1039263 ssh_runner.go:195] Run: cat /version.json
	I0729 14:37:53.658129 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.660964 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661349 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661374 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661400 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661446 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661628 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661711 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661795 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.661918 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.662012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662092 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662200 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.662234 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.764286 1039263 ssh_runner.go:195] Run: systemctl --version
	I0729 14:37:53.772936 1039263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:37:53.922874 1039263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:37:53.928953 1039263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:37:53.929035 1039263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:37:53.947388 1039263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:37:53.947417 1039263 start.go:495] detecting cgroup driver to use...
	I0729 14:37:53.947496 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:37:53.964141 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:37:53.985980 1039263 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:37:53.986042 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:37:54.009646 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:37:54.023449 1039263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:37:54.139511 1039263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:37:54.312559 1039263 docker.go:233] disabling docker service ...
	I0729 14:37:54.312655 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:37:54.327466 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:37:54.342225 1039263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:37:54.485007 1039263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:37:54.623987 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:37:54.638100 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:37:54.658833 1039263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:37:54.658911 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.670274 1039263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:37:54.670366 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.681548 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.691626 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.701915 1039263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:37:54.713399 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.723631 1039263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.740625 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.751521 1039263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:37:54.761895 1039263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:37:54.761942 1039263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:37:54.775663 1039263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:37:54.785415 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:54.933441 1039263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:37:55.066449 1039263 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:37:55.066539 1039263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:37:55.071614 1039263 start.go:563] Will wait 60s for crictl version
	I0729 14:37:55.071671 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:37:55.075727 1039263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:37:55.117286 1039263 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:37:55.117395 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.145732 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.179714 1039263 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:37:55.181109 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:55.184274 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.184734 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:55.184761 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.185066 1039263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 14:37:55.190374 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:55.206768 1039263 kubeadm.go:883] updating cluster {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:37:55.207054 1039263 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:37:55.207130 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:55.247814 1039263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:37:55.247890 1039263 ssh_runner.go:195] Run: which lz4
	I0729 14:37:55.251992 1039263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:37:55.256440 1039263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:37:55.256468 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:37:53.678402 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Start
	I0729 14:37:53.678610 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring networks are active...
	I0729 14:37:53.679311 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network default is active
	I0729 14:37:53.679767 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network mk-default-k8s-diff-port-751306 is active
	I0729 14:37:53.680133 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Getting domain xml...
	I0729 14:37:53.680818 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Creating domain...
	I0729 14:37:54.024601 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting to get IP...
	I0729 14:37:54.025431 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025838 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025944 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.025837 1040438 retry.go:31] will retry after 280.254814ms: waiting for machine to come up
	I0729 14:37:54.307727 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308293 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.308220 1040438 retry.go:31] will retry after 384.348242ms: waiting for machine to come up
	I0729 14:37:54.693703 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694304 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.694251 1040438 retry.go:31] will retry after 417.76448ms: waiting for machine to come up
	I0729 14:37:55.113670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114243 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114272 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.114191 1040438 retry.go:31] will retry after 589.741485ms: waiting for machine to come up
	I0729 14:37:55.706127 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706646 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.706569 1040438 retry.go:31] will retry after 471.427821ms: waiting for machine to come up
	I0729 14:37:56.179380 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179867 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179896 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.179814 1040438 retry.go:31] will retry after 624.275074ms: waiting for machine to come up
	I0729 14:37:56.805673 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806042 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806063 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.806018 1040438 retry.go:31] will retry after 1.027377333s: waiting for machine to come up
	I0729 14:37:56.743842 1039263 crio.go:462] duration metric: took 1.49188656s to copy over tarball
	I0729 14:37:56.743941 1039263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:37:58.879573 1039263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135595087s)
	I0729 14:37:58.879619 1039263 crio.go:469] duration metric: took 2.135735155s to extract the tarball
	I0729 14:37:58.879628 1039263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:37:58.916966 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:58.958323 1039263 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:37:58.958349 1039263 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:37:58.958357 1039263 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.30.3 crio true true} ...
	I0729 14:37:58.958537 1039263 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-668123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:37:58.958632 1039263 ssh_runner.go:195] Run: crio config
	I0729 14:37:59.004120 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:37:59.004146 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:37:59.004163 1039263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:37:59.004192 1039263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-668123 NodeName:embed-certs-668123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:37:59.004371 1039263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-668123"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:37:59.004469 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:37:59.014796 1039263 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:37:59.014866 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:37:59.024575 1039263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 14:37:59.040707 1039263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:37:59.056693 1039263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 14:37:59.073320 1039263 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0729 14:37:59.077226 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:59.091283 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:59.221532 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:37:59.239319 1039263 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123 for IP: 192.168.50.53
	I0729 14:37:59.239362 1039263 certs.go:194] generating shared ca certs ...
	I0729 14:37:59.239387 1039263 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:37:59.239604 1039263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:37:59.239654 1039263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:37:59.239667 1039263 certs.go:256] generating profile certs ...
	I0729 14:37:59.239818 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/client.key
	I0729 14:37:59.239922 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key.544998fe
	I0729 14:37:59.239969 1039263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key
	I0729 14:37:59.240137 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:37:59.240188 1039263 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:37:59.240202 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:37:59.240238 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:37:59.240280 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:37:59.240313 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:37:59.240385 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:59.241551 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:37:59.278842 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:37:59.305668 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:37:59.332314 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:37:59.377867 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 14:37:59.405592 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:37:59.438073 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:37:59.462130 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:37:59.489158 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:37:59.511811 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:37:59.534728 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:37:59.558680 1039263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:37:59.575404 1039263 ssh_runner.go:195] Run: openssl version
	I0729 14:37:59.581518 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:37:59.592024 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596913 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596983 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.602973 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:37:59.613891 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:37:59.624053 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628881 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628922 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.634672 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:37:59.645513 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:37:59.656385 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661141 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661209 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.667169 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:37:59.678240 1039263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:37:59.683075 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:37:59.689013 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:37:59.694754 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:37:59.700865 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:37:59.706664 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:37:59.712457 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
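The six openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates expire within the next 24 hours before reusing them. As a hedged illustration only (not minikube's own code), the same check can be written in Go with crypto/x509; the certificate path is simply reused from the log above and the 24-hour window mirrors the -checkend argument:

// certExpiresSoon reports whether the PEM-encoded certificate at path
// expires within the given window (roughly what openssl's -checkend does).
// Illustrative sketch only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certExpiresSoon(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring "soon" means NotAfter falls inside the window from now.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := certExpiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}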
	I0729 14:37:59.718347 1039263 kubeadm.go:392] StartCluster: {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:37:59.718460 1039263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:37:59.718505 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.756046 1039263 cri.go:89] found id: ""
	I0729 14:37:59.756143 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:37:59.766198 1039263 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:37:59.766222 1039263 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:37:59.766278 1039263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:37:59.775664 1039263 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:37:59.776877 1039263 kubeconfig.go:125] found "embed-certs-668123" server: "https://192.168.50.53:8443"
	I0729 14:37:59.778802 1039263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:37:59.787805 1039263 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.53
	I0729 14:37:59.787840 1039263 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:37:59.787854 1039263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:37:59.787908 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.828927 1039263 cri.go:89] found id: ""
	I0729 14:37:59.829016 1039263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:37:59.844889 1039263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:37:59.854233 1039263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:37:59.854264 1039263 kubeadm.go:157] found existing configuration files:
	
	I0729 14:37:59.854334 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:37:59.863123 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:37:59.863183 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:37:59.872154 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:37:59.880819 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:37:59.880881 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:37:59.889714 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.898278 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:37:59.898330 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.907358 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:37:59.916352 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:37:59.916430 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:37:59.925239 1039263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:37:59.934353 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.045086 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.793783 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.009839 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.080217 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.153377 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:01.153496 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:37:57.835202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835636 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835674 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:57.835572 1040438 retry.go:31] will retry after 987.763901ms: waiting for machine to come up
	I0729 14:37:58.824975 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825428 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825457 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:58.825348 1040438 retry.go:31] will retry after 1.189429393s: waiting for machine to come up
	I0729 14:38:00.016130 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016569 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016604 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:00.016509 1040438 retry.go:31] will retry after 1.424039091s: waiting for machine to come up
	I0729 14:38:01.443138 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443511 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:01.443470 1040438 retry.go:31] will retry after 2.531090823s: waiting for machine to come up
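The repeated "retry.go:31] will retry after ..." lines above are the driver polling libvirt for the default-k8s-diff-port-751306 domain's IP address, sleeping a growing, jittered delay between attempts. A minimal sketch of that polling pattern follows; the retryWithBackoff helper, its timings, and the stand-in check are illustrative assumptions, not minikube's implementation:

// retryWithBackoff polls check() until it succeeds or the deadline passes,
// sleeping a jittered, growing delay between attempts. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add jitter and grow the delay, similar in spirit to the
		// "will retry after 1.189429393s" messages in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	// Hypothetical check standing in for "has the VM obtained an IP yet?".
	start := time.Now()
	err := retryWithBackoff(func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}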
	I0729 14:38:01.653905 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.153772 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.653590 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.669429 1039263 api_server.go:72] duration metric: took 1.516051254s to wait for apiserver process to appear ...
	I0729 14:38:02.669467 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:02.669495 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.531413 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.531451 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.531467 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.602173 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.602205 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.670522 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.680835 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:05.680861 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.170512 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.176052 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.176084 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.669679 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.674813 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.674854 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:07.170539 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:07.174573 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:38:07.180250 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:07.180275 1039263 api_server.go:131] duration metric: took 4.510799806s to wait for apiserver health ...
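The sequence above polls https://192.168.50.53:8443/healthz, treating the 403 (anonymous user) and 500 (poststarthooks still failing) responses as "not ready yet" and stopping once a plain 200 "ok" comes back. A hedged Go sketch of that kind of poll is below; the InsecureSkipVerify client and the half-second interval are assumptions made for the example, not the configuration used by the test:

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout elapses. Illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-local cert during bring-up, so this
		// example skips verification; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// Surface the failing poststarthook checks, as the log does.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.53:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}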
	I0729 14:38:07.180284 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:38:07.180290 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:07.181866 1039263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:03.976004 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976514 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976544 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:03.976474 1040438 retry.go:31] will retry after 3.356304099s: waiting for machine to come up
	I0729 14:38:07.335600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336031 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336086 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:07.335992 1040438 retry.go:31] will retry after 3.345416128s: waiting for machine to come up
	I0729 14:38:07.182966 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:07.193166 1039263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:38:07.212801 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:07.221297 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:07.221331 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:07.221340 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:07.221347 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:07.221352 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:07.221364 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:38:07.221370 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:07.221379 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:07.221384 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:38:07.221390 1039263 system_pods.go:74] duration metric: took 8.574498ms to wait for pod list to return data ...
	I0729 14:38:07.221397 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:07.224197 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:07.224220 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:07.224231 1039263 node_conditions.go:105] duration metric: took 2.829585ms to run NodePressure ...
	I0729 14:38:07.224246 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:07.520049 1039263 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524228 1039263 kubeadm.go:739] kubelet initialised
	I0729 14:38:07.524251 1039263 kubeadm.go:740] duration metric: took 4.174563ms waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524262 1039263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:07.529174 1039263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.533534 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533554 1039263 pod_ready.go:81] duration metric: took 4.355926ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.533562 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.537529 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537550 1039263 pod_ready.go:81] duration metric: took 3.975082ms for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.537561 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.542299 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542325 1039263 pod_ready.go:81] duration metric: took 4.747863ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.542371 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542390 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.616688 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616725 1039263 pod_ready.go:81] duration metric: took 74.323327ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.616740 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616750 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.016334 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016360 1039263 pod_ready.go:81] duration metric: took 399.599984ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.016369 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016374 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.416536 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416571 1039263 pod_ready.go:81] duration metric: took 400.189243ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.416585 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416594 1039263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.817526 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817561 1039263 pod_ready.go:81] duration metric: took 400.956263ms for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.817572 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817590 1039263 pod_ready.go:38] duration metric: took 1.293313082s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:08.817610 1039263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:38:08.829394 1039263 ops.go:34] apiserver oom_adj: -16
	I0729 14:38:08.829425 1039263 kubeadm.go:597] duration metric: took 9.06319609s to restartPrimaryControlPlane
	I0729 14:38:08.829436 1039263 kubeadm.go:394] duration metric: took 9.111098315s to StartCluster
	I0729 14:38:08.829457 1039263 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.829548 1039263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:08.831113 1039263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.831396 1039263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:38:08.831441 1039263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:38:08.831524 1039263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-668123"
	I0729 14:38:08.831539 1039263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-668123"
	I0729 14:38:08.831562 1039263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-668123"
	W0729 14:38:08.831572 1039263 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:38:08.831561 1039263 addons.go:69] Setting metrics-server=true in profile "embed-certs-668123"
	I0729 14:38:08.831593 1039263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-668123"
	I0729 14:38:08.831601 1039263 addons.go:234] Setting addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:08.831609 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	W0729 14:38:08.831610 1039263 addons.go:243] addon metrics-server should already be in state true
	I0729 14:38:08.831632 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:08.831644 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.831916 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831933 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831918 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831956 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831945 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831964 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.833223 1039263 out.go:177] * Verifying Kubernetes components...
	I0729 14:38:08.834403 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:08.847231 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0729 14:38:08.847362 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0729 14:38:08.847398 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0729 14:38:08.847797 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847896 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847904 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.848350 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848371 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848487 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848507 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848520 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848540 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848854 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848867 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.849010 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849392 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.849416 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.851933 1039263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-668123"
	W0729 14:38:08.851955 1039263 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:38:08.851988 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.852284 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.852330 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.865255 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I0729 14:38:08.865707 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.865981 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0729 14:38:08.866157 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866183 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.866419 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.866531 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.866804 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.866895 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866920 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.867272 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.867839 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.867885 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.868000 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0729 14:38:08.868397 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.868741 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.868886 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.868903 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.869276 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.869501 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.870835 1039263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:38:08.871289 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.872267 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:38:08.872289 1039263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:38:08.872306 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.873165 1039263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:08.874593 1039263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:08.874616 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:38:08.874635 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.875061 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875572 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.875605 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875815 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.876012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.876208 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.876370 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.877997 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878394 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.878423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878555 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.878726 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.878889 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.879002 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.890720 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0729 14:38:08.891092 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.891619 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.891638 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.891972 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.892184 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.893577 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.893817 1039263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:08.893840 1039263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:38:08.893859 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.896843 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897302 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.897320 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897464 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.897618 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.897866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.897966 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:09.019001 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:09.038038 1039263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:09.097896 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:09.101844 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:09.229339 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:38:09.229360 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:38:09.317591 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:38:09.317625 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:38:09.370444 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:09.370490 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:38:09.407869 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:10.014739 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014767 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.014873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014897 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015112 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015150 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015157 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015166 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015174 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015284 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015297 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015306 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015313 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015384 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015413 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015611 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015641 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024010 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.024031 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.024299 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.024318 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024343 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.233873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.233903 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234247 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.234260 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234275 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234292 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.234301 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234546 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234563 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234574 1039263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:10.236215 1039263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:38:10.237377 1039263 addons.go:510] duration metric: took 1.405942032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
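For reference (not part of the captured log): the block above applies the storage-provisioner, storageclass and metrics-server manifests and then records metrics-server=true. A minimal sketch of checking the same addon by hand, assuming the kubeconfig context carries the profile name embed-certs-668123 and that metrics-server registers the usual v1beta1.metrics.k8s.io APIService:

	kubectl --context embed-certs-668123 -n kube-system get deployment metrics-server
	kubectl --context embed-certs-668123 get apiservice v1beta1.metrics.k8s.io
	kubectl --context embed-certs-668123 top nodes

Note that the log above pins the metrics-server image to fake.domain/registry.k8s.io/echoserver:1.4, so in this run the deployment would not be expected to become available.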
	I0729 14:38:11.042263 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:12.129080 1039759 start.go:364] duration metric: took 3m18.14725367s to acquireMachinesLock for "old-k8s-version-360866"
	I0729 14:38:12.129155 1039759 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:12.129166 1039759 fix.go:54] fixHost starting: 
	I0729 14:38:12.129715 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:12.129752 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:12.146596 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0729 14:38:12.147101 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:12.147554 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:38:12.147581 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:12.147871 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:12.148094 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:12.148293 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetState
	I0729 14:38:12.149880 1039759 fix.go:112] recreateIfNeeded on old-k8s-version-360866: state=Stopped err=<nil>
	I0729 14:38:12.149918 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	W0729 14:38:12.150120 1039759 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:12.152003 1039759 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-360866" ...
	I0729 14:38:10.683699 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684108 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Found IP for machine: 192.168.72.233
	I0729 14:38:10.684148 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has current primary IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684161 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserving static IP address...
	I0729 14:38:10.684506 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.684540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | skip adding static IP to network mk-default-k8s-diff-port-751306 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"}
	I0729 14:38:10.684558 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserved static IP address: 192.168.72.233
	I0729 14:38:10.684581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for SSH to be available...
	I0729 14:38:10.684600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Getting to WaitForSSH function...
	I0729 14:38:10.686336 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686684 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.686713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686825 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH client type: external
	I0729 14:38:10.686856 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa (-rw-------)
	I0729 14:38:10.686894 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:10.686904 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | About to run SSH command:
	I0729 14:38:10.686921 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | exit 0
	I0729 14:38:10.808536 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | SSH cmd err, output: <nil>: 
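For reference, the WaitForSSH probe above amounts to roughly the following single invocation, reassembled from the argument list logged a few lines up (illustrative only; paths and address are taken verbatim from the log):

	/usr/bin/ssh -F /dev/null \
	  -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none \
	  -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa \
	  -p 22 docker@192.168.72.233 'exit 0'

The empty "SSH cmd err, output" line above is the successful result of that probe.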
	I0729 14:38:10.808965 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetConfigRaw
	I0729 14:38:10.809613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:10.812200 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812590 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.812625 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812862 1039440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/config.json ...
	I0729 14:38:10.813089 1039440 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:10.813110 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:10.813322 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.815607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.815933 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.815962 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.816113 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.816287 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816450 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.816838 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.817167 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.817184 1039440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:10.916864 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:10.916908 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917215 1039440 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751306"
	I0729 14:38:10.917249 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.919961 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920339 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.920363 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920471 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.920660 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.921145 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.921358 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.921377 1039440 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751306 && echo "default-k8s-diff-port-751306" | sudo tee /etc/hostname
	I0729 14:38:11.034826 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751306
	
	I0729 14:38:11.034859 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.037494 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.037836 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.037870 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.038068 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.038274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038410 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038575 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.038736 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.038971 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.038998 1039440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751306/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:11.146350 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:11.146391 1039440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:11.146449 1039440 buildroot.go:174] setting up certificates
	I0729 14:38:11.146463 1039440 provision.go:84] configureAuth start
	I0729 14:38:11.146478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:11.146842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:11.149492 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149766 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.149796 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.152449 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152735 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.152785 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152956 1039440 provision.go:143] copyHostCerts
	I0729 14:38:11.153010 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:11.153021 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:11.153074 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:11.153172 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:11.153180 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:11.153198 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:11.153253 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:11.153260 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:11.153276 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:11.153324 1039440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751306 san=[127.0.0.1 192.168.72.233 default-k8s-diff-port-751306 localhost minikube]
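As a sketch only: the SANs baked into the server certificate generated above could be inspected on the Jenkins host with standard openssl (certificate path taken from the log; the grep is an assumption about openssl's text layout):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem | \
	  grep -A1 'Subject Alternative Name'

which should list 127.0.0.1, 192.168.72.233, default-k8s-diff-port-751306, localhost and minikube, matching the san=[...] list above.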
	I0729 14:38:11.489907 1039440 provision.go:177] copyRemoteCerts
	I0729 14:38:11.489990 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:11.490028 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.492487 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492801 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.492832 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492992 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.493220 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.493467 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.493611 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.574475 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:11.598182 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:11.622809 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 14:38:11.646533 1039440 provision.go:87] duration metric: took 500.054139ms to configureAuth
	I0729 14:38:11.646563 1039440 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:11.646742 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:11.646822 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.649260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.649616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649729 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.649934 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.650436 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.650610 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.650628 1039440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:11.906877 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:11.906918 1039440 machine.go:97] duration metric: took 1.093811728s to provisionDockerMachine
	I0729 14:38:11.906936 1039440 start.go:293] postStartSetup for "default-k8s-diff-port-751306" (driver="kvm2")
	I0729 14:38:11.906951 1039440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:11.906977 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:11.907366 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:11.907407 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.910366 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910725 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.910748 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910913 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.911162 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.911323 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.911529 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.992133 1039440 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:11.996426 1039440 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:11.996456 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:11.996544 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:11.996641 1039440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:11.996747 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:12.006165 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:12.029591 1039440 start.go:296] duration metric: took 122.613174ms for postStartSetup
	I0729 14:38:12.029643 1039440 fix.go:56] duration metric: took 18.376148792s for fixHost
	I0729 14:38:12.029670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.032299 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032667 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.032731 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032901 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.033104 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033372 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.033510 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:12.033679 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:12.033688 1039440 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:12.128889 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263892.107886376
	
	I0729 14:38:12.128917 1039440 fix.go:216] guest clock: 1722263892.107886376
	I0729 14:38:12.128926 1039440 fix.go:229] Guest: 2024-07-29 14:38:12.107886376 +0000 UTC Remote: 2024-07-29 14:38:12.029648961 +0000 UTC m=+239.632909800 (delta=78.237415ms)
	I0729 14:38:12.128955 1039440 fix.go:200] guest clock delta is within tolerance: 78.237415ms
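The tolerance check above is plain subtraction: guest 1722263892.107886376 minus remote 1722263892.029648961 equals 0.078237415 s, i.e. the 78.237415ms delta reported, which fix.go accepts as within tolerance.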
	I0729 14:38:12.128961 1039440 start.go:83] releasing machines lock for "default-k8s-diff-port-751306", held for 18.475504041s
	I0729 14:38:12.128995 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.129301 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:12.132025 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132374 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.132401 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132566 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133087 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133273 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133349 1039440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:12.133404 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.133513 1039440 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:12.133534 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.136121 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136149 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136523 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136577 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136624 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136716 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136793 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136917 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.136973 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.137088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137165 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137292 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.137232 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.233842 1039440 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:12.240082 1039440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:12.388404 1039440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:12.395038 1039440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:12.395127 1039440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:12.416590 1039440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:12.416618 1039440 start.go:495] detecting cgroup driver to use...
	I0729 14:38:12.416682 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:12.437863 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:12.453458 1039440 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:12.453508 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:12.467657 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:12.482328 1039440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:12.610786 1039440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:12.774787 1039440 docker.go:233] disabling docker service ...
	I0729 14:38:12.774861 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:12.790091 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:12.803914 1039440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:12.933894 1039440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:13.052159 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:13.069309 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:13.089959 1039440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:38:13.090014 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.102668 1039440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:13.102741 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.113634 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.124374 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.135488 1039440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:13.147171 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.159757 1039440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.178620 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
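Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (schematic reconstruction from the commands above; surrounding keys, section headers and ordering are omitted and not guaranteed):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

That is: CRI-O is pointed at the 3.9 pause image, switched to the cgroupfs cgroup driver with conmon in the pod cgroup, and allowed to bind unprivileged ports starting at 0.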
	I0729 14:38:13.189326 1039440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:13.200007 1039440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:13.200067 1039440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:13.213063 1039440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:13.226044 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:13.360685 1039440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:13.508473 1039440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:13.508556 1039440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:13.513547 1039440 start.go:563] Will wait 60s for crictl version
	I0729 14:38:13.513619 1039440 ssh_runner.go:195] Run: which crictl
	I0729 14:38:13.518528 1039440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:13.567103 1039440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:13.567180 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.603837 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.633583 1039440 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:38:12.153214 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .Start
	I0729 14:38:12.153408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring networks are active...
	I0729 14:38:12.154141 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network default is active
	I0729 14:38:12.154590 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network mk-old-k8s-version-360866 is active
	I0729 14:38:12.154970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Getting domain xml...
	I0729 14:38:12.155733 1039759 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:38:12.526504 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting to get IP...
	I0729 14:38:12.527560 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.528068 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.528147 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.528048 1040622 retry.go:31] will retry after 240.079974ms: waiting for machine to come up
	I0729 14:38:12.769388 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.769881 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.769910 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.769829 1040622 retry.go:31] will retry after 271.200632ms: waiting for machine to come up
	I0729 14:38:13.042584 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.043069 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.043101 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.043049 1040622 retry.go:31] will retry after 464.725959ms: waiting for machine to come up
	I0729 14:38:13.509830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.510400 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.510434 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.510350 1040622 retry.go:31] will retry after 416.316047ms: waiting for machine to come up
	I0729 14:38:13.042877 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:15.051282 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:13.635092 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:13.638202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638665 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:13.638691 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638933 1039440 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:13.642960 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
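The bash one-liner above filters any stale host.minikube.internal entry out of /etc/hosts and appends a single line, so inside the VM the file ends with

	192.168.72.1	host.minikube.internal

giving workloads a stable name for the host side of the 192.168.72.0/24 libvirt network.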
	I0729 14:38:13.656098 1039440 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:13.656208 1039440 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:38:13.656255 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:13.697398 1039440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:38:13.697475 1039440 ssh_runner.go:195] Run: which lz4
	I0729 14:38:13.701632 1039440 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:38:13.707129 1039440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:13.707162 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:38:15.218414 1039440 crio.go:462] duration metric: took 1.516807674s to copy over tarball
	I0729 14:38:15.218505 1039440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:13.927885 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.928343 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.928373 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.928307 1040622 retry.go:31] will retry after 659.670364ms: waiting for machine to come up
	I0729 14:38:14.589644 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:14.590143 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:14.590172 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:14.590031 1040622 retry.go:31] will retry after 738.020335ms: waiting for machine to come up
	I0729 14:38:15.330093 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:15.330603 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:15.330633 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:15.330553 1040622 retry.go:31] will retry after 1.13067902s: waiting for machine to come up
	I0729 14:38:16.462554 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:16.463002 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:16.463031 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:16.462977 1040622 retry.go:31] will retry after 1.342785853s: waiting for machine to come up
	I0729 14:38:17.806889 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:17.807333 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:17.807365 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:17.807266 1040622 retry.go:31] will retry after 1.804812934s: waiting for machine to come up
	I0729 14:38:16.550848 1039263 node_ready.go:49] node "embed-certs-668123" has status "Ready":"True"
	I0729 14:38:16.550880 1039263 node_ready.go:38] duration metric: took 7.512808712s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:16.550895 1039263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:16.563220 1039263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570054 1039263 pod_ready.go:92] pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:16.570080 1039263 pod_ready.go:81] duration metric: took 6.832939ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570091 1039263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:19.207981 1039263 pod_ready.go:102] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:17.498961 1039440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.280415291s)
	I0729 14:38:17.498997 1039440 crio.go:469] duration metric: took 2.280548689s to extract the tarball
	I0729 14:38:17.499008 1039440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:17.537972 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:17.583582 1039440 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:38:17.583609 1039440 cache_images.go:84] Images are preloaded, skipping loading
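
	(Editorial aside: the preload handling above runs `sudo crictl images --output json` on the guest and only copies and extracts the 406 MB preload tarball when the expected kube images are missing; after extraction the same check reports all images preloaded. A rough local illustration of that check, with the JSON field names assumed from crictl's CRI output rather than taken from the harness:)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// imageList mirrors the assumed shape of `crictl images --output json`:
	// an "images" array whose entries carry repoTags.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether any listed image tag starts with want,
	// e.g. "registry.k8s.io/kube-apiserver:v1.30.3".
	func hasImage(out []byte, want string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if strings.HasPrefix(tag, want) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.30.3")
		fmt.Println("preloaded:", ok, "err:", err)
	}
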
	I0729 14:38:17.583617 1039440 kubeadm.go:934] updating node { 192.168.72.233 8444 v1.30.3 crio true true} ...
	I0729 14:38:17.583719 1039440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:17.583789 1039440 ssh_runner.go:195] Run: crio config
	I0729 14:38:17.637202 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:17.637230 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:17.637243 1039440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:17.637272 1039440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.233 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751306 NodeName:default-k8s-diff-port-751306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:38:17.637451 1039440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751306"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:17.637528 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:38:17.650173 1039440 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:17.650259 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:17.661790 1039440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 14:38:17.680720 1039440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:17.700420 1039440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
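
	(Editorial aside: the kubeadm config rendered above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. As a hedged illustration, one way to sanity-check such a multi-document YAML before shipping it, using gopkg.in/yaml.v3 as an assumed dependency rather than anything the harness itself uses, would be:)

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// findClusterEndpoint walks every YAML document in the stream and
	// returns the ClusterConfiguration's controlPlaneEndpoint, if any.
	func findClusterEndpoint(r io.Reader) (string, error) {
		dec := yaml.NewDecoder(r)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				return "", err
			}
			if doc["kind"] == "ClusterConfiguration" {
				if ep, ok := doc["controlPlaneEndpoint"].(string); ok {
					return ep, nil
				}
			}
		}
		return "", errors.New("no ClusterConfiguration document found")
	}

	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		ep, err := findClusterEndpoint(f)
		if err != nil {
			fmt.Println(err)
			return
		}
		// For the config above this should be control-plane.minikube.internal:8444.
		fmt.Println("controlPlaneEndpoint:", ep, "port 8444:", strings.HasSuffix(ep, ":8444"))
	}
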
	I0729 14:38:17.723134 1039440 ssh_runner.go:195] Run: grep 192.168.72.233	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:17.727510 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
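
	(Editorial aside: the grep and bash pipeline just above make sure /etc/hosts ends up with exactly one "192.168.72.233	control-plane.minikube.internal" entry. A rough Go equivalent of that idempotent rewrite, pointed at a scratch file here because the real /etc/hosts needs root, might look like:)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostEntry rewrites a hosts-format file so that exactly one
	// line maps hostname to ip, mirroring the grep/cp pipeline above.
	func ensureHostEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+hostname) {
				continue // drop any stale mapping for this hostname
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+hostname)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostEntry("hosts.test", "192.168.72.233", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
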
	I0729 14:38:17.741033 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:17.889833 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:17.910486 1039440 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306 for IP: 192.168.72.233
	I0729 14:38:17.910540 1039440 certs.go:194] generating shared ca certs ...
	I0729 14:38:17.910565 1039440 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:17.910763 1039440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:17.910821 1039440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:17.910833 1039440 certs.go:256] generating profile certs ...
	I0729 14:38:17.910941 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/client.key
	I0729 14:38:17.911003 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key.811a3f6d
	I0729 14:38:17.911105 1039440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key
	I0729 14:38:17.911271 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:17.911315 1039440 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:17.911329 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:17.911362 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:17.911393 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:17.911426 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:17.911478 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:17.912301 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:17.948102 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:17.984122 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:18.019932 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:18.062310 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 14:38:18.093176 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:38:18.124016 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:18.151933 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:38:18.179714 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:18.203414 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:18.233286 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:18.262871 1039440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:18.283064 1039440 ssh_runner.go:195] Run: openssl version
	I0729 14:38:18.289016 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:18.299409 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304053 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304115 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.309976 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:18.321472 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:18.331916 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336822 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336881 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.342762 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:18.353478 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:18.364299 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369024 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369076 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.376534 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
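
	(Editorial aside: the openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its subject-hash name, 51391683.0, 3ec20f2e.0 and b5213941.0 here. Shelling out the same way from Go, as an illustrative sketch rather than the harness's own code:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash asks openssl for the certificate's subject hash and
	// creates the <hash>.0 symlink that OpenSSL-based clients look up.
	func linkByHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		// Replace any stale link, mirroring `ln -fs`.
		_ = os.Remove(link)
		return link, os.Symlink(certPath, link)
	}

	func main() {
		// certsDir would be /etc/ssl/certs on the guest; use the local dir here.
		link, err := linkByHash("minikubeCA.pem", ".")
		fmt.Println(link, err)
	}
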
	I0729 14:38:18.387360 1039440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:18.392392 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:18.398520 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:18.404397 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:18.410922 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:18.417193 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:18.423808 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
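
	(Editorial aside: each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. The same question answered with the Go standard library, as a minimal sketch with a hypothetical file name:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires in
	// less than d, the check `openssl x509 -checkend 86400` performs.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println("expires within 24h:", soon, "err:", err)
	}
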
	I0729 14:38:18.433345 1039440 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:18.433463 1039440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:18.433582 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.476749 1039440 cri.go:89] found id: ""
	I0729 14:38:18.476834 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:18.488548 1039440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:18.488570 1039440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:18.488628 1039440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:18.499081 1039440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:18.500064 1039440 kubeconfig.go:125] found "default-k8s-diff-port-751306" server: "https://192.168.72.233:8444"
	I0729 14:38:18.502130 1039440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:18.511589 1039440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.233
	I0729 14:38:18.511631 1039440 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:18.511646 1039440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:18.511698 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.559691 1039440 cri.go:89] found id: ""
	I0729 14:38:18.559779 1039440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:18.576217 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:18.586031 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:18.586057 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:18.586110 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:38:18.595032 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:18.595096 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:18.604320 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:38:18.613996 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:18.614053 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:18.623345 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.631898 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:18.631943 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.641303 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:38:18.649849 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:18.649907 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:18.659657 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:18.668914 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:18.782351 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:19.902413 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.120025721s)
	I0729 14:38:19.902451 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.120455 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.206099 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.293738 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:20.293850 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:20.794840 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.294958 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.313567 1039440 api_server.go:72] duration metric: took 1.019826572s to wait for apiserver process to appear ...
	I0729 14:38:21.313600 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:21.313625 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:21.314152 1039440 api_server.go:269] stopped: https://192.168.72.233:8444/healthz: Get "https://192.168.72.233:8444/healthz": dial tcp 192.168.72.233:8444: connect: connection refused
	I0729 14:38:21.813935 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:19.613474 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:19.613801 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:19.613830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:19.613749 1040622 retry.go:31] will retry after 1.449593132s: waiting for machine to come up
	I0729 14:38:21.064774 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:21.065382 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:21.065405 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:21.065314 1040622 retry.go:31] will retry after 1.807508073s: waiting for machine to come up
	I0729 14:38:22.874485 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:22.874896 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:22.874925 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:22.874844 1040622 retry.go:31] will retry after 3.036719557s: waiting for machine to come up
	I0729 14:38:21.578125 1039263 pod_ready.go:92] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.578152 1039263 pod_ready.go:81] duration metric: took 5.008051755s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.578164 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584521 1039263 pod_ready.go:92] pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.584544 1039263 pod_ready.go:81] duration metric: took 6.372252ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584558 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590245 1039263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.590269 1039263 pod_ready.go:81] duration metric: took 5.702853ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590280 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594576 1039263 pod_ready.go:92] pod "kube-proxy-2v79q" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.594602 1039263 pod_ready.go:81] duration metric: took 4.314692ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594614 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787339 1039263 pod_ready.go:92] pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.787379 1039263 pod_ready.go:81] duration metric: took 192.756548ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787399 1039263 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:23.795588 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
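
	(Editorial aside: the pod_ready lines above poll each system pod until its Ready condition reports True, and keep watching metrics-server-569cc877fc-5msnp, which stays not-ready. For context, a bare-bones version of that wait using client-go; the package paths are the standard ones, while the helper names are made up for this sketch:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady checks the Ready condition on a single pod.
	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodReady polls the API until the named pod is Ready or the
	// timeout elapses, roughly what pod_ready.go is reporting above.
	func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s never became Ready", ns, name)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			fmt.Println(err)
			return
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(waitForPodReady(cs, "kube-system", "etcd-embed-certs-668123", 6*time.Minute))
	}
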
	I0729 14:38:24.561135 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:24.561176 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:24.561195 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.635519 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.635550 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:24.813755 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.817972 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.818003 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.314643 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.320059 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.320094 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.814758 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.820578 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.820613 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.314798 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.319346 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.319384 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.813907 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.821176 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.821208 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.314614 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.319335 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:27.319361 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.814188 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.819010 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:38:27.826057 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:27.826082 1039440 api_server.go:131] duration metric: took 6.512474877s to wait for apiserver health ...
	I0729 14:38:27.826091 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:27.826098 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:27.827698 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
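Note: the repeated 500s above are the apiserver reporting that its apiservice-discovery-controller post-start hook has not finished yet; the wait loop simply re-polls /healthz (here https://192.168.72.233:8444/healthz) roughly every half second until it returns 200 "ok". A minimal illustrative poller in Go, a sketch only and not minikube's actual api_server.go code, assuming the cluster's self-signed certificate is not verified:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 "ok" or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a cluster-internal CA, so certificate
	// verification is skipped in this sketch.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// A 500 with "healthz check failed" means post-start hooks
			// are still settling; log it and retry after a short pause.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.233:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}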
	I0729 14:38:25.913642 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:25.914139 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:25.914166 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:25.914099 1040622 retry.go:31] will retry after 3.839238383s: waiting for machine to come up
	I0729 14:38:26.293618 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:28.294115 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:30.296010 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.361688 1038758 start.go:364] duration metric: took 52.182622805s to acquireMachinesLock for "no-preload-603534"
	I0729 14:38:31.361756 1038758 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:31.361765 1038758 fix.go:54] fixHost starting: 
	I0729 14:38:31.362279 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:31.362319 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:31.380259 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0729 14:38:31.380783 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:31.381320 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:38:31.381349 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:31.381649 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:31.381848 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:31.381989 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:38:31.383537 1038758 fix.go:112] recreateIfNeeded on no-preload-603534: state=Stopped err=<nil>
	I0729 14:38:31.383561 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	W0729 14:38:31.383739 1038758 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:31.385496 1038758 out.go:177] * Restarting existing kvm2 VM for "no-preload-603534" ...
	I0729 14:38:31.386878 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Start
	I0729 14:38:31.387071 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring networks are active...
	I0729 14:38:31.387821 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network default is active
	I0729 14:38:31.388141 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network mk-no-preload-603534 is active
	I0729 14:38:31.388649 1038758 main.go:141] libmachine: (no-preload-603534) Getting domain xml...
	I0729 14:38:31.391807 1038758 main.go:141] libmachine: (no-preload-603534) Creating domain...
	I0729 14:38:27.829109 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:27.839810 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:38:27.858724 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:27.868075 1039440 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:27.868112 1039440 system_pods.go:61] "coredns-7db6d8ff4d-m6dlw" [7ce45b48-f04d-4167-8a6e-643b2fb3c4f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:27.868121 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [7ccadfd7-8b68-45c0-9670-af97b90d35d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:27.868128 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [5e8c8e17-28db-499c-a940-e67d92b28bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:27.868134 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [a2d31d58-d8d9-4070-96af-0d1af763d0b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:27.868140 1039440 system_pods.go:61] "kube-proxy-p6dv5" [c44edf0a-f608-49f2-ab53-7ffbcdf13b5e] Running
	I0729 14:38:27.868146 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [b87ee044-f43f-4aa7-94b3-4f2ad2213ce9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:27.868152 1039440 system_pods.go:61] "metrics-server-569cc877fc-gmz64" [296e883c-7394-4004-a25f-e93b4be52d46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:27.868156 1039440 system_pods.go:61] "storage-provisioner" [ec3b78f1-96a3-47b2-958d-82258a074634] Running
	I0729 14:38:27.868165 1039440 system_pods.go:74] duration metric: took 9.405484ms to wait for pod list to return data ...
	I0729 14:38:27.868173 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:27.871538 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:27.871563 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:27.871575 1039440 node_conditions.go:105] duration metric: took 3.397306ms to run NodePressure ...
	I0729 14:38:27.871596 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:28.143890 1039440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148855 1039440 kubeadm.go:739] kubelet initialised
	I0729 14:38:28.148880 1039440 kubeadm.go:740] duration metric: took 4.952057ms waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148891 1039440 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:28.154636 1039440 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:30.161265 1039440 pod_ready.go:102] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.161979 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:31.162005 1039440 pod_ready.go:81] duration metric: took 3.007344998s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:31.162015 1039440 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
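Note: the pod_ready lines record per-pod waits: each system-critical pod is re-fetched until its Ready condition reports True (or the 4m0s budget runs out). A rough client-go sketch of such a check, a hypothetical helper rather than the pod_ready.go code that produced these lines:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls a pod until it is Ready or the timeout elapses.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Load the local kubeconfig (e.g. the profile minikube wrote).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPodReady(cs, "kube-system", "coredns-7db6d8ff4d-m6dlw", 4*time.Minute))
}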
	I0729 14:38:29.755060 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755512 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has current primary IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755524 1039759 main.go:141] libmachine: (old-k8s-version-360866) Found IP for machine: 192.168.39.71
	I0729 14:38:29.755536 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserving static IP address...
	I0729 14:38:29.755975 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.756008 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserved static IP address: 192.168.39.71
	I0729 14:38:29.756035 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | skip adding static IP to network mk-old-k8s-version-360866 - found existing host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"}
	I0729 14:38:29.756048 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting for SSH to be available...
	I0729 14:38:29.756067 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Getting to WaitForSSH function...
	I0729 14:38:29.758527 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.758899 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.758944 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.759003 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH client type: external
	I0729 14:38:29.759024 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa (-rw-------)
	I0729 14:38:29.759058 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:29.759070 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | About to run SSH command:
	I0729 14:38:29.759083 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | exit 0
	I0729 14:38:29.884425 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:29.884833 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:38:29.885450 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:29.887929 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888241 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.888294 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888624 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:38:29.888895 1039759 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:29.888919 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:29.889221 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.891654 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892013 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.892038 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892163 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.892350 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892598 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892764 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.892968 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.893158 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.893169 1039759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:29.993529 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:29.993564 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.993859 1039759 buildroot.go:166] provisioning hostname "old-k8s-version-360866"
	I0729 14:38:29.993893 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.994074 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.996882 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997279 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.997308 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997537 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.997699 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997856 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997976 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.998206 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.998412 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.998429 1039759 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-360866 && echo "old-k8s-version-360866" | sudo tee /etc/hostname
	I0729 14:38:30.115298 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-360866
	
	I0729 14:38:30.115331 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.118349 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.118763 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.118793 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.119029 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.119203 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119356 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119561 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.119772 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.119976 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.120019 1039759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-360866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-360866/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-360866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:30.229987 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:30.230017 1039759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:30.230059 1039759 buildroot.go:174] setting up certificates
	I0729 14:38:30.230070 1039759 provision.go:84] configureAuth start
	I0729 14:38:30.230090 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:30.230436 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:30.233150 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233501 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.233533 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233719 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.236157 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236494 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.236534 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236713 1039759 provision.go:143] copyHostCerts
	I0729 14:38:30.236786 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:30.236797 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:30.236856 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:30.236976 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:30.236986 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:30.237006 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:30.237071 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:30.237078 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:30.237095 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:30.237153 1039759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-360866 san=[127.0.0.1 192.168.39.71 localhost minikube old-k8s-version-360866]
	I0729 14:38:30.680859 1039759 provision.go:177] copyRemoteCerts
	I0729 14:38:30.680933 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:30.680970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.683890 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684229 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.684262 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684430 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.684634 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.684822 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.684973 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:30.770659 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:30.799011 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 14:38:30.825536 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:30.850751 1039759 provision.go:87] duration metric: took 620.664228ms to configureAuth
	I0729 14:38:30.850795 1039759 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:30.850998 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:38:30.851072 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.853735 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854065 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.854102 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854197 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.854408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854559 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854717 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.854961 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.855169 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.855187 1039759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:31.119354 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:31.119386 1039759 machine.go:97] duration metric: took 1.230472142s to provisionDockerMachine
	I0729 14:38:31.119401 1039759 start.go:293] postStartSetup for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:38:31.119415 1039759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:31.119456 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.119885 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:31.119926 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.123196 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123576 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.123607 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123826 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.124053 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.124276 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.124469 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.208607 1039759 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:31.213173 1039759 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:31.213206 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:31.213268 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:31.213352 1039759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:31.213454 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:31.225256 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:31.253156 1039759 start.go:296] duration metric: took 133.735669ms for postStartSetup
	I0729 14:38:31.253208 1039759 fix.go:56] duration metric: took 19.124042428s for fixHost
	I0729 14:38:31.253237 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.256005 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256340 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.256375 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256535 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.256732 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.256927 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.257075 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.257272 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:31.257445 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:31.257455 1039759 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:31.361488 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263911.340365932
	
	I0729 14:38:31.361512 1039759 fix.go:216] guest clock: 1722263911.340365932
	I0729 14:38:31.361519 1039759 fix.go:229] Guest: 2024-07-29 14:38:31.340365932 +0000 UTC Remote: 2024-07-29 14:38:31.253213714 +0000 UTC m=+217.413183116 (delta=87.152218ms)
	I0729 14:38:31.361572 1039759 fix.go:200] guest clock delta is within tolerance: 87.152218ms
	I0729 14:38:31.361583 1039759 start.go:83] releasing machines lock for "old-k8s-version-360866", held for 19.232453759s
	I0729 14:38:31.361611 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.361921 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:31.364981 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365412 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.365441 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365648 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366227 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366482 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366583 1039759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:31.366644 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.366761 1039759 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:31.366797 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.369658 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.369699 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370051 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370081 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370105 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370125 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370309 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370325 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370567 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370568 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370773 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370809 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370958 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.370957 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.472108 1039759 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:31.478939 1039759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:31.630720 1039759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:31.637768 1039759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:31.637874 1039759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:31.655476 1039759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:31.655504 1039759 start.go:495] detecting cgroup driver to use...
	I0729 14:38:31.655584 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:31.679387 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:31.704260 1039759 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:31.704318 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:31.727875 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:31.743197 1039759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:31.867502 1039759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:32.035088 1039759 docker.go:233] disabling docker service ...
	I0729 14:38:32.035169 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:32.050118 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:32.064828 1039759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:32.202938 1039759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:32.333330 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:32.348845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:32.369848 1039759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:38:32.369923 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.381787 1039759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:32.381893 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.394331 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.405323 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.417259 1039759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:32.428997 1039759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:32.440934 1039759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:32.441003 1039759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:32.454949 1039759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:32.466042 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:32.596308 1039759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:32.762548 1039759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:32.762632 1039759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:32.768336 1039759 start.go:563] Will wait 60s for crictl version
	I0729 14:38:32.768447 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:32.772850 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:32.829827 1039759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:32.829936 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.863269 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.897768 1039759 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:38:32.899209 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:32.902257 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902649 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:32.902680 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902928 1039759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:32.908590 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:32.921952 1039759 kubeadm.go:883] updating cluster {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:32.922094 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:38:32.922141 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:32.969932 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:32.970003 1039759 ssh_runner.go:195] Run: which lz4
	I0729 14:38:32.974564 1039759 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:38:32.980128 1039759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:32.980173 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:38:32.795590 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.295541 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.750580 1038758 main.go:141] libmachine: (no-preload-603534) Waiting to get IP...
	I0729 14:38:31.751732 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.752236 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.752340 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.752236 1040763 retry.go:31] will retry after 239.008836ms: waiting for machine to come up
	I0729 14:38:31.993011 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.993538 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.993569 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.993481 1040763 retry.go:31] will retry after 288.863538ms: waiting for machine to come up
	I0729 14:38:32.284306 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.284941 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.284980 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.284867 1040763 retry.go:31] will retry after 410.903425ms: waiting for machine to come up
	I0729 14:38:32.697686 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.698291 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.698322 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.698227 1040763 retry.go:31] will retry after 423.090324ms: waiting for machine to come up
	I0729 14:38:33.122914 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.123550 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.123579 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.123500 1040763 retry.go:31] will retry after 744.030348ms: waiting for machine to come up
	I0729 14:38:33.869849 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.870499 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.870523 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.870456 1040763 retry.go:31] will retry after 888.516658ms: waiting for machine to come up
	I0729 14:38:34.760145 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:34.760594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:34.760627 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:34.760534 1040763 retry.go:31] will retry after 889.371631ms: waiting for machine to come up
	I0729 14:38:35.651169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:35.651700 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:35.651731 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:35.651636 1040763 retry.go:31] will retry after 1.200333492s: waiting for machine to come up
	I0729 14:38:33.181695 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.672201 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:34.707140 1039759 crio.go:462] duration metric: took 1.732619622s to copy over tarball
	I0729 14:38:34.707232 1039759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:37.740076 1039759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.032804006s)
	I0729 14:38:37.740105 1039759 crio.go:469] duration metric: took 3.032930405s to extract the tarball
	I0729 14:38:37.740113 1039759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:37.786934 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:37.827451 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:37.827484 1039759 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:37.827576 1039759 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:37.827606 1039759 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.827624 1039759 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.827702 1039759 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.827607 1039759 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.827683 1039759 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829621 1039759 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.829709 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.829724 1039759 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.829628 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829808 1039759 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:38:37.829625 1039759 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.113249 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.373433 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.378382 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.380909 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.382431 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.391678 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.392565 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.419739 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:38:38.491174 1039759 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:38:38.491255 1039759 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.491320 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570681 1039759 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:38:38.570784 1039759 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:38:38.570832 1039759 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.570889 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570792 1039759 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.570721 1039759 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:38:38.570966 1039759 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.570977 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570992 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.576687 1039759 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:38:38.576728 1039759 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.576769 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587650 1039759 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:38:38.587699 1039759 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.587701 1039759 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:38:38.587738 1039759 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:38:38.587753 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587791 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587866 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.587883 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.587913 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.587948 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.591209 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.599567 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.610869 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:38:38.742939 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:38:38.742974 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:38:38.743091 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:38:38.743098 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:38:38.745789 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:38:38.745857 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:38:38.753643 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:38:38.753704 1039759 cache_images.go:92] duration metric: took 926.203812ms to LoadCachedImages
	W0729 14:38:38.753790 1039759 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 14:38:38.753804 1039759 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.20.0 crio true true} ...
	I0729 14:38:38.753931 1039759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-360866 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:38.753992 1039759 ssh_runner.go:195] Run: crio config
	I0729 14:38:38.802220 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:38:38.802246 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:38.802258 1039759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:38.802285 1039759 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-360866 NodeName:old-k8s-version-360866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:38:38.802487 1039759 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-360866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:38.802591 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:38:38.816832 1039759 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:38.816934 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:38.827468 1039759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 14:38:38.847125 1039759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:38.865302 1039759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 14:38:37.795799 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:40.294979 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:36.853388 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:36.853944 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:36.853979 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:36.853881 1040763 retry.go:31] will retry after 1.750535475s: waiting for machine to come up
	I0729 14:38:38.605644 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:38.606135 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:38.606185 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:38.606079 1040763 retry.go:31] will retry after 2.245294623s: waiting for machine to come up
	I0729 14:38:40.853761 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:40.854277 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:40.854311 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:40.854214 1040763 retry.go:31] will retry after 1.864975071s: waiting for machine to come up
	I0729 14:38:38.299326 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:39.170692 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.170720 1039440 pod_ready.go:81] duration metric: took 8.008696752s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.170735 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177419 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.177449 1039440 pod_ready.go:81] duration metric: took 6.705958ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177463 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185538 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.185566 1039440 pod_ready.go:81] duration metric: took 2.008093791s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185580 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193833 1039440 pod_ready.go:92] pod "kube-proxy-p6dv5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.193864 1039440 pod_ready.go:81] duration metric: took 8.275486ms for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193878 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200931 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.200963 1039440 pod_ready.go:81] duration metric: took 7.075212ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200978 1039440 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:38.884267 1039759 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:38.889206 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:38.905643 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:39.032065 1039759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:39.051892 1039759 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866 for IP: 192.168.39.71
	I0729 14:38:39.051991 1039759 certs.go:194] generating shared ca certs ...
	I0729 14:38:39.052019 1039759 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.052203 1039759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:39.052258 1039759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:39.052270 1039759 certs.go:256] generating profile certs ...
	I0729 14:38:39.091359 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key
	I0729 14:38:39.091485 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0
	I0729 14:38:39.091554 1039759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key
	I0729 14:38:39.091718 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:39.091763 1039759 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:39.091776 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:39.091804 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:39.091835 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:39.091867 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:39.091924 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:39.092850 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:39.125528 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:39.153093 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:39.181324 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:39.235516 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 14:38:39.262599 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:38:39.293085 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:39.326318 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:38:39.361548 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:39.386876 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:39.412529 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:39.438418 1039759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:39.459519 1039759 ssh_runner.go:195] Run: openssl version
	I0729 14:38:39.466109 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:39.477941 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482748 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482820 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.489099 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:39.500207 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:39.511513 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516125 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516183 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.522297 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:39.533536 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:39.544996 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549681 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549733 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.556332 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:39.571393 1039759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:39.578420 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:39.586316 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:39.593450 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:39.600604 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:39.607483 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:39.614692 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:39.621776 1039759 kubeadm.go:392] StartCluster: {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:39.621893 1039759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:39.621955 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.673544 1039759 cri.go:89] found id: ""
	I0729 14:38:39.673634 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:39.687887 1039759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:39.687912 1039759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:39.687963 1039759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:39.701616 1039759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:39.702914 1039759 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:39.703576 1039759 kubeconfig.go:62] /home/jenkins/minikube-integration/19338-974764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-360866" cluster setting kubeconfig missing "old-k8s-version-360866" context setting]
	I0729 14:38:39.704951 1039759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.715056 1039759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:39.728384 1039759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.71
	I0729 14:38:39.728448 1039759 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:39.728466 1039759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:39.728534 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.778476 1039759 cri.go:89] found id: ""
	I0729 14:38:39.778561 1039759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:39.800712 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:39.813243 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:39.813265 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:39.813323 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:38:39.824822 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:39.824897 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:39.834966 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:38:39.847660 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:39.847887 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:39.861117 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.873868 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:39.873936 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.884195 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:38:39.895155 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:39.895234 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:39.909138 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:39.920721 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:40.055932 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.173909 1039759 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.117933178s)
	I0729 14:38:41.173947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.419684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.550852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.655941 1039759 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:41.656040 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.156080 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.656948 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.656087 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.794217 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.293634 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:42.720182 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:42.720674 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:42.720701 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:42.720614 1040763 retry.go:31] will retry after 2.929394717s: waiting for machine to come up
	I0729 14:38:45.653508 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:45.654044 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:45.654069 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:45.653993 1040763 retry.go:31] will retry after 4.133064498s: waiting for machine to come up
	I0729 14:38:43.208287 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.706607 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:44.156583 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:44.657199 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.156268 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.656786 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.156393 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.656151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.156507 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.656922 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.156840 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.656756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.294322 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.795189 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.789721 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790248 1038758 main.go:141] libmachine: (no-preload-603534) Found IP for machine: 192.168.61.116
	I0729 14:38:49.790272 1038758 main.go:141] libmachine: (no-preload-603534) Reserving static IP address...
	I0729 14:38:49.790290 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has current primary IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790823 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.790860 1038758 main.go:141] libmachine: (no-preload-603534) Reserved static IP address: 192.168.61.116
	I0729 14:38:49.790883 1038758 main.go:141] libmachine: (no-preload-603534) DBG | skip adding static IP to network mk-no-preload-603534 - found existing host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"}
	I0729 14:38:49.790920 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Getting to WaitForSSH function...
	I0729 14:38:49.790937 1038758 main.go:141] libmachine: (no-preload-603534) Waiting for SSH to be available...
	I0729 14:38:49.793243 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793646 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.793679 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793820 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH client type: external
	I0729 14:38:49.793850 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa (-rw-------)
	I0729 14:38:49.793884 1038758 main.go:141] libmachine: (no-preload-603534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:49.793899 1038758 main.go:141] libmachine: (no-preload-603534) DBG | About to run SSH command:
	I0729 14:38:49.793961 1038758 main.go:141] libmachine: (no-preload-603534) DBG | exit 0
	I0729 14:38:49.924827 1038758 main.go:141] libmachine: (no-preload-603534) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:49.925188 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetConfigRaw
	I0729 14:38:49.925835 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:49.928349 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.928799 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.928830 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.929091 1038758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/config.json ...
	I0729 14:38:49.929313 1038758 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:49.929334 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:49.929556 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:49.932040 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932431 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.932473 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932629 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:49.932798 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.932930 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.933033 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:49.933142 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:49.933313 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:49.933324 1038758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:50.049016 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:50.049059 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049328 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:38:50.049354 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049566 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.052138 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052532 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.052561 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052736 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.052918 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053093 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053269 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.053462 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.053641 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.053653 1038758 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-603534 && echo "no-preload-603534" | sudo tee /etc/hostname
	I0729 14:38:50.189302 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-603534
	
	I0729 14:38:50.189341 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.192559 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.192949 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.192974 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.193248 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.193476 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193689 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193870 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.194082 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.194305 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.194329 1038758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603534/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:50.322506 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:50.322540 1038758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:50.322564 1038758 buildroot.go:174] setting up certificates
	I0729 14:38:50.322577 1038758 provision.go:84] configureAuth start
	I0729 14:38:50.322589 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.322938 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:50.325594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.325957 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.325994 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.326139 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.328455 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328803 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.328828 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328950 1038758 provision.go:143] copyHostCerts
	I0729 14:38:50.329015 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:50.329025 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:50.329078 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:50.329165 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:50.329173 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:50.329192 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:50.329243 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:50.329249 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:50.329264 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:50.329310 1038758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.no-preload-603534 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-603534]
	I0729 14:38:50.447706 1038758 provision.go:177] copyRemoteCerts
	I0729 14:38:50.447777 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:50.447810 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.450714 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451106 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.451125 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451444 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.451679 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.451855 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.451975 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.539025 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:50.567887 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:50.594581 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 14:38:50.619475 1038758 provision.go:87] duration metric: took 296.880769ms to configureAuth
	I0729 14:38:50.619509 1038758 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:50.619708 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:38:50.619797 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.622753 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623121 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.623151 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623331 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.623519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623684 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623813 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.623971 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.624151 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.624168 1038758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:50.895618 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:50.895649 1038758 machine.go:97] duration metric: took 966.320375ms to provisionDockerMachine
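Editor's note: the `%!s(MISSING)` tokens that appear in the logged SSH commands above (and again later for the `date` call) are printf-verb artifacts introduced when the already-formatted command string is passed back through the logger; they are not part of what ran on the guest. Reconstructing the intended command is a sketch (the `%s` verb is inferred, the option value is taken verbatim from the output above):

    # what the provisioner effectively runs on the guest to pass
    # --insecure-registry to CRI-O and have the daemon pick it up
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio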
	I0729 14:38:50.895662 1038758 start.go:293] postStartSetup for "no-preload-603534" (driver="kvm2")
	I0729 14:38:50.895684 1038758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:50.895717 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:50.896084 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:50.896112 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.899586 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.899998 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.900031 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.900168 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.900424 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.900622 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.900799 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.987195 1038758 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:50.991924 1038758 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:50.991952 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:50.992025 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:50.992111 1038758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:50.992208 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:51.002048 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:51.029714 1038758 start.go:296] duration metric: took 134.037621ms for postStartSetup
	I0729 14:38:51.029758 1038758 fix.go:56] duration metric: took 19.66799406s for fixHost
	I0729 14:38:51.029782 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.032495 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.032819 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.032844 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.033049 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.033236 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033377 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033587 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.033767 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:51.034007 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:51.034021 1038758 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:51.149481 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263931.130931233
	
	I0729 14:38:51.149510 1038758 fix.go:216] guest clock: 1722263931.130931233
	I0729 14:38:51.149520 1038758 fix.go:229] Guest: 2024-07-29 14:38:51.130931233 +0000 UTC Remote: 2024-07-29 14:38:51.029761931 +0000 UTC m=+354.409484230 (delta=101.169302ms)
	I0729 14:38:51.149575 1038758 fix.go:200] guest clock delta is within tolerance: 101.169302ms
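Editor's note: the guest-clock check above runs `date +%s.%N` over SSH (again mangled to `%!s(MISSING).%!N(MISSING)` by the logger) and compares the result against the host wall clock; the ~101 ms skew is under the tolerance, so no clock resync is forced. A minimal sketch of the same comparison, using the two timestamps captured in the log (the tolerance threshold itself is not stated in the log):

    guest=1722263931.130931233   # epoch seconds.nanoseconds reported by the guest
    host=1722263931.029761931    # host time at the same moment
    delta=$(echo "$guest - $host" | bc | tr -d -)   # absolute skew in seconds
    echo "skew: ${delta}s"                          # prints: skew: .101169302s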
	I0729 14:38:51.149583 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 19.787859214s
	I0729 14:38:51.149617 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.149923 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:51.152671 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153054 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.153081 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153298 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.153898 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154092 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154192 1038758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:51.154245 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.154349 1038758 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:51.154378 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.157173 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157200 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157560 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157592 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157635 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157654 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157955 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.157976 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.158169 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158195 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158370 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158381 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158565 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.158572 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.260806 1038758 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:51.266847 1038758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:51.412637 1038758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:51.418879 1038758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:51.418954 1038758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:51.435946 1038758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:51.435978 1038758 start.go:495] detecting cgroup driver to use...
	I0729 14:38:51.436061 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:51.457517 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:51.472718 1038758 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:51.472811 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:51.487062 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:51.501410 1038758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:51.617292 1038758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:47.708063 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.708506 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.209337 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:51.764302 1038758 docker.go:233] disabling docker service ...
	I0729 14:38:51.764386 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:51.779137 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:51.794372 1038758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:51.930402 1038758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:52.062691 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:52.076796 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:52.095912 1038758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 14:38:52.095994 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.107507 1038758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:52.107588 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.119470 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.131252 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.141672 1038758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:52.152086 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.163682 1038758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.189614 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.200279 1038758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:52.211878 1038758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:52.211943 1038758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:52.224909 1038758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:52.234312 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:52.357370 1038758 ssh_runner.go:195] Run: sudo systemctl restart crio
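Editor's note: the block from 14:38:52.076 to 14:38:52.357 rewrites the CRI configuration in place: it points crictl at the CRI-O socket, then uses sed to pin the pause image, switch the cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports, before reloading systemd and restarting crio. The net effect on the two files, reconstructed only from the commands shown above (any other keys in 02-crio.conf are left untouched and are not shown):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf  (keys touched by the sed edits)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]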
	I0729 14:38:52.492520 1038758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:52.492622 1038758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:52.497537 1038758 start.go:563] Will wait 60s for crictl version
	I0729 14:38:52.497595 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.501292 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:52.544320 1038758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:52.544428 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.575452 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.605920 1038758 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 14:38:49.156539 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:49.656397 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.656968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.156321 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.656183 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.157099 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.656725 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.157009 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.656787 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.796331 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:53.799083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.607410 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:52.610017 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610296 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:52.610330 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610553 1038758 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:52.614659 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:52.626967 1038758 kubeadm.go:883] updating cluster {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:52.627087 1038758 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:38:52.627124 1038758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:52.662824 1038758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 14:38:52.662852 1038758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:52.662901 1038758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.662968 1038758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.663040 1038758 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 14:38:52.663043 1038758 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.663066 1038758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.663017 1038758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.664360 1038758 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 14:38:52.664947 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.664965 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.664954 1038758 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.665015 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.665023 1038758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.665351 1038758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.665423 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.829143 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.829158 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.829541 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.851797 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.866728 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 14:38:52.884604 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.893636 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.946087 1038758 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 14:38:52.946134 1038758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 14:38:52.946160 1038758 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.946170 1038758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.946173 1038758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 14:38:52.946192 1038758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.946216 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946221 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946217 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.954361 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.001715 1038758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 14:38:53.001766 1038758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.001826 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106651 1038758 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 14:38:53.106713 1038758 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.106770 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106838 1038758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 14:38:53.106883 1038758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.106921 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106927 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:53.106962 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:53.107012 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:53.107038 1038758 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 14:38:53.107067 1038758 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.107079 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.107092 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.131562 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.212076 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.212199 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.212272 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.214338 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.214430 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.216771 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.216941 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 14:38:53.217037 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:53.220214 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.220306 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.272021 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 14:38:53.272140 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:53.275939 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 14:38:53.275988 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276008 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.276009 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276029 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:38:53.276054 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.301528 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 14:38:53.301578 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 14:38:53.301600 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 14:38:53.301647 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 14:38:53.301759 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:38:55.357295 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.08120738s)
	I0729 14:38:55.357329 1038758 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.081270007s)
	I0729 14:38:55.357371 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 14:38:55.357338 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 14:38:55.357384 1038758 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.055605102s)
	I0729 14:38:55.357406 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 14:38:55.357407 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:55.357464 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:54.708330 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.207468 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:54.156921 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:54.656957 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.156201 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.656783 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.156180 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.656984 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.156610 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.656127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.156785 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.656192 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.295143 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:58.795511 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.217512 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.860011805s)
	I0729 14:38:57.217539 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 14:38:57.217570 1038758 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:57.217634 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:59.187398 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969733063s)
	I0729 14:38:59.187443 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 14:38:59.187482 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:59.187562 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:39:01.138568 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.950970137s)
	I0729 14:39:01.138617 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 14:39:01.138654 1038758 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:39:01.138740 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:59.207657 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:01.208795 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:59.156740 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:59.656223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.156726 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.656593 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.156115 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.656364 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.157069 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.656491 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.156938 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.656898 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.293858 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:03.484613 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.793953 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.231830 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.093043665s)
	I0729 14:39:04.231866 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 14:39:04.231897 1038758 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:04.231963 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:05.182458 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 14:39:05.182512 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:05.182566 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:03.209198 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.707557 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.157177 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:04.656505 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.156530 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.656389 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.156606 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.657121 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.157048 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.656497 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.156327 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.656868 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.794522 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.794886 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:07.253615 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.070972791s)
	I0729 14:39:07.253665 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 14:39:07.253700 1038758 cache_images.go:123] Successfully loaded all cached images
	I0729 14:39:07.253707 1038758 cache_images.go:92] duration metric: took 14.590842072s to LoadCachedImages
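Editor's note: the sequence from 14:38:52.829 to 14:39:07.253 is the per-image cache-load cycle: `podman image inspect` checks whether the runtime already has the image, `crictl rmi` clears a stale tag, `stat` decides whether the cached tarball is already under /var/lib/minikube/images (all of them are here, so the scp is skipped), and `podman load -i` imports it. A condensed sketch of that flow for a single image, with the loop and error handling of the real code omitted:

    img="registry.k8s.io/kube-scheduler:v1.31.0-beta.0"
    tar="/var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0"

    # only (re)load when the container runtime does not already have the image
    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # drop any stale tag first
        sudo podman load -i "$tar"                            # import the cached tarball
    fi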
	I0729 14:39:07.253720 1038758 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0-beta.0 crio true true} ...
	I0729 14:39:07.253899 1038758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-603534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
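Editor's note: the kubelet unit fragment above is a standard systemd drop-in; the empty `ExecStart=` line clears the ExecStart inherited from the base kubelet.service before the next line redefines it with the profile-specific flags. Judging by the 324-byte scp at 14:39:07.333 below, this content presumably lands in the 10-kubeadm.conf drop-in; a trimmed sketch of that file, copied from the log above (the path is inferred, not stated next to the dump):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-603534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116

    [Install]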
	I0729 14:39:07.253980 1038758 ssh_runner.go:195] Run: crio config
	I0729 14:39:07.309694 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:07.309720 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:07.309731 1038758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:39:07.309754 1038758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603534 NodeName:no-preload-603534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:39:07.309916 1038758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603534"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:39:07.309985 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 14:39:07.321876 1038758 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:39:07.321967 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:39:07.333057 1038758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 14:39:07.350193 1038758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 14:39:07.367171 1038758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 14:39:07.384123 1038758 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0729 14:39:07.387896 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
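Editor's note: both /etc/hosts updates (14:38:52.614 for host.minikube.internal and here for control-plane.minikube.internal) use the same pattern: filter the old entry out, append the new one into a temp file, then install it with `sudo cp`. The indirection is needed because the shell performs the `>` redirection before sudo runs, so an unprivileged redirect straight into /etc/hosts would fail. The same step in isolation (hostname and IP taken from the log; any pair works the same way):

    entry="192.168.61.116	control-plane.minikube.internal"
    # rebuild /etc/hosts without the old entry, append the new one, then install it as root
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts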
	I0729 14:39:07.400317 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:39:07.525822 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:39:07.545142 1038758 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534 for IP: 192.168.61.116
	I0729 14:39:07.545167 1038758 certs.go:194] generating shared ca certs ...
	I0729 14:39:07.545189 1038758 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:39:07.545389 1038758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:39:07.545448 1038758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:39:07.545463 1038758 certs.go:256] generating profile certs ...
	I0729 14:39:07.545582 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/client.key
	I0729 14:39:07.545658 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key.117a155a
	I0729 14:39:07.545725 1038758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key
	I0729 14:39:07.545881 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:39:07.545913 1038758 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:39:07.545922 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:39:07.545945 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:39:07.545969 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:39:07.545990 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:39:07.546027 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:39:07.546679 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:39:07.582488 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:39:07.617327 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:39:07.647627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:39:07.685799 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:39:07.720365 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:39:07.744627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:39:07.771409 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:39:07.797570 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:39:07.820888 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:39:07.843714 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:39:07.867365 1038758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:39:07.884283 1038758 ssh_runner.go:195] Run: openssl version
	I0729 14:39:07.890379 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:39:07.901894 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906431 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906487 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.912284 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:39:07.923393 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:39:07.934119 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938563 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938620 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.944115 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:39:07.954815 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:39:07.965864 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970695 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970761 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.977340 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
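Editor's note: the certificate installation above follows the OpenSSL hashed-directory convention: each CA certificate is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under the name <subject-hash>.0 (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs), which is how OpenSSL locates trust anchors without rescanning the whole directory. The linking step for one certificate looks like:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # the .0 suffix disambiguates hash collisions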
	I0729 14:39:07.990416 1038758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:39:07.995446 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:39:08.001615 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:39:08.007621 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:39:08.013648 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:39:08.019525 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:39:08.025505 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
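Editor's note: the run of `openssl x509 -noout -checkend 86400` calls above is a pre-start expiry audit of the existing control-plane certificates: for each cert the command exits 0 if it will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, presumably so the restart path can reuse the certs rather than regenerate them. For example:

    # exit status 0 => cert still valid in 24h; 1 => it expires within 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "etcd server cert OK for at least another 24h"
    fi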
	I0729 14:39:08.031480 1038758 kubeadm.go:392] StartCluster: {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:39:08.031592 1038758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:39:08.031657 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.077847 1038758 cri.go:89] found id: ""
	I0729 14:39:08.077936 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:39:08.088616 1038758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:39:08.088639 1038758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:39:08.088704 1038758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:39:08.101150 1038758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:39:08.102305 1038758 kubeconfig.go:125] found "no-preload-603534" server: "https://192.168.61.116:8443"
	I0729 14:39:08.105529 1038758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:39:08.117031 1038758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0729 14:39:08.117070 1038758 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:39:08.117085 1038758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:39:08.117148 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.171626 1038758 cri.go:89] found id: ""
	I0729 14:39:08.171706 1038758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:39:08.190491 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:39:08.200776 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:39:08.200806 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:39:08.200873 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:39:08.211430 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:39:08.211483 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:39:08.221865 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:39:08.231668 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:39:08.231719 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:39:08.242027 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.251585 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:39:08.251639 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.261521 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:39:08.271210 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:39:08.271284 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
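
The cleanup loop above keeps a kubeconfig under /etc/kubernetes only if it already references https://control-plane.minikube.internal:8443, and removes it otherwise (here every file is absent, so each rm is a no-op). A short sketch of that logic in Go, assuming the same four files; illustrative only:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Keep a kubeconfig only if it already points at the expected control-plane endpoint.
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			fmt.Printf("%s missing or stale, removing\n", f)
			_ = os.Remove(f) // ignore the error if the file never existed
		}
	}
}
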
	I0729 14:39:08.281112 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:39:08.290948 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:08.417397 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.400064 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.590859 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.670134 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
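
The five kubeadm invocations above re-create, in order, the certificates, the kubeconfigs, the kubelet configuration, the static control-plane manifests, and local etcd, all against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of running the same phase sequence from Go with os/exec; the phase names and paths come from the log, the wrapper itself is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them during a control-plane restart.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	env := `PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH"`
	for _, phase := range phases {
		cmd := fmt.Sprintf("sudo env %s kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", env, phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return // later phases depend on earlier ones
		}
	}
	fmt.Println("control-plane phases completed")
}
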
	I0729 14:39:09.781580 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:39:09.781719 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.282592 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.781923 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.843114 1038758 api_server.go:72] duration metric: took 1.061535691s to wait for apiserver process to appear ...
	I0729 14:39:10.843151 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:39:10.843182 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:10.843715 1038758 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0729 14:39:11.343301 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:08.207563 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:10.208276 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.156858 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:09.656910 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.156126 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.657149 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.156223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.657184 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.156454 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.656896 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.656971 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.993249 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:13.993278 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:13.993298 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.011972 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:14.012012 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:14.343228 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.347946 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.347983 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:14.844144 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.858278 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.858311 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:15.343885 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:15.350223 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:39:15.360468 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:39:15.360513 1038758 api_server.go:131] duration metric: took 4.517353977s to wait for apiserver health ...
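
The healthz wait above cycles through connection refused (apiserver not listening yet), 403 and 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A minimal sketch of that polling loop in Go; the URL and ~500ms cadence come from the log, the HTTP client setup is assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes,
// roughly what the api_server.go wait in the log is doing.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cluster-internal certificate; a real client
		// would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.61.116:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
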
	I0729 14:39:15.360524 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:15.360532 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:15.362679 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:39:12.293516 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.294107 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.364237 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:39:15.379974 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
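
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration the "Configuring bridge CNI" step refers to. A representative bridge + portmap conflist written from Go; the exact contents minikube generates may differ, so the subnet and plugin options here are assumptions:

package main

import (
	"fmt"
	"os"
)

func main() {
	// A typical bridge + portmap CNI chain; only representative of what the
	// step above installs, not a copy of minikube's generated file.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
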
	I0729 14:39:15.422444 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:39:15.441468 1038758 system_pods.go:59] 8 kube-system pods found
	I0729 14:39:15.441512 1038758 system_pods.go:61] "coredns-5cfdc65f69-tjdx4" [986cdef3-de61-4c0f-bc75-fae4f6b44a37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:39:15.441525 1038758 system_pods.go:61] "etcd-no-preload-603534" [e27f5761-5322-4d88-b90a-bcff42c9dfa5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:39:15.441537 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [33ed9f7c-1240-40cf-b51d-125b3473bfd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:39:15.441547 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [f79520a2-380e-4d8a-b1ff-78c6cd3d3741] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:39:15.441559 1038758 system_pods.go:61] "kube-proxy-ftpk5" [a5471ad7-5fd3-49b7-8631-4ca2962761d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:39:15.441568 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [860e262c-f036-4181-a0ad-8ba0058a47d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:39:15.441580 1038758 system_pods.go:61] "metrics-server-78fcd8795b-59sbc" [8af92987-ce8d-434f-93de-16d0adc35fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:39:15.441598 1038758 system_pods.go:61] "storage-provisioner" [579d0cc8-e30e-4ee3-ac55-c2f0bc5871e1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:39:15.441606 1038758 system_pods.go:74] duration metric: took 19.133029ms to wait for pod list to return data ...
	I0729 14:39:15.441618 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:39:15.445594 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:39:15.445630 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:39:15.445646 1038758 node_conditions.go:105] duration metric: took 4.019018ms to run NodePressure ...
	I0729 14:39:15.445678 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:15.743404 1038758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751028 1038758 kubeadm.go:739] kubelet initialised
	I0729 14:39:15.751050 1038758 kubeadm.go:740] duration metric: took 7.619795ms waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751059 1038758 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:39:15.759157 1038758 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
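
The pod_ready waits that dominate the rest of this log check the PodReady condition on each system-critical pod until it reports True or the 4m0s budget runs out. A sketch of the same check with client-go, assuming a kubeconfig at the default location; the pod and namespace names are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's PodReady condition is True.
func isPodReady(pod *v1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-tjdx4", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
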
	I0729 14:39:12.708704 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.208434 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:14.656806 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.156564 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.656881 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.156239 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.656440 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.157130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.656240 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.156161 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.656808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.294741 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:18.797700 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.768132 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.265670 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.709929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.206710 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.207809 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:19.156721 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:19.656766 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.156352 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.656788 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.156179 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.656213 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.156475 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.656275 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.156592 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.656979 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.294265 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:23.294366 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:25.794648 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.265947 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.266644 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.708214 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:27.208824 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.156798 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:24.656473 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.156551 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.656356 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.156887 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.656332 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.156494 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.656839 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.156763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.656512 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.795415 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:30.293460 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:26.766260 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.265817 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.265851 1038758 pod_ready.go:81] duration metric: took 13.506661461s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.265865 1038758 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276021 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.276043 1038758 pod_ready.go:81] duration metric: took 10.172055ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276052 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280197 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.280215 1038758 pod_ready.go:81] duration metric: took 4.156785ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280223 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284076 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.284096 1038758 pod_ready.go:81] duration metric: took 3.865927ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284122 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288280 1038758 pod_ready.go:92] pod "kube-proxy-ftpk5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.288297 1038758 pod_ready.go:81] duration metric: took 4.16843ms for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288305 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666771 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.666802 1038758 pod_ready.go:81] duration metric: took 378.49001ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666813 1038758 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.706596 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:32.208095 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.156096 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:29.656289 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.156756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.656888 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.156563 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.156271 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.656562 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.293988 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.793456 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:31.674203 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.174002 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.708005 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:37.206789 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.157046 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:34.656398 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.156198 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.656763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.156542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.656994 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.156808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.657093 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.156119 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.657017 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.793771 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.294267 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:36.676693 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.172713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.174348 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.207584 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.707645 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:39.656176 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.156455 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.656609 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.156891 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.656327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:41.656423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:41.701839 1039759 cri.go:89] found id: ""
	I0729 14:39:41.701863 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.701872 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:41.701878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:41.701942 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:41.738281 1039759 cri.go:89] found id: ""
	I0729 14:39:41.738308 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.738315 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:41.738321 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:41.738377 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:41.771954 1039759 cri.go:89] found id: ""
	I0729 14:39:41.771981 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.771990 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:41.771996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:41.772060 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:41.806157 1039759 cri.go:89] found id: ""
	I0729 14:39:41.806182 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.806190 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:41.806196 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:41.806249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:41.841284 1039759 cri.go:89] found id: ""
	I0729 14:39:41.841312 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.841319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:41.841325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:41.841394 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:41.875864 1039759 cri.go:89] found id: ""
	I0729 14:39:41.875893 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.875902 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:41.875908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:41.875962 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:41.909797 1039759 cri.go:89] found id: ""
	I0729 14:39:41.909824 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.909833 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:41.909840 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:41.909892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:41.943886 1039759 cri.go:89] found id: ""
	I0729 14:39:41.943912 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.943920 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:41.943929 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:41.943944 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:41.983224 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:41.983254 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:42.035264 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:42.035303 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:42.049343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:42.049369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:42.171904 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:42.171924 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:42.171947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
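
With no control-plane containers found, the 1039759 process falls back to gathering diagnostics: container status via crictl, the kubelet and CRI-O journals, dmesg, and a kubectl describe nodes that fails because nothing is listening on localhost:8443. A sketch of collecting the same diagnostics from Go; the command strings mirror the log, the wrapper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same diagnostics the log gathers when no control-plane containers exist.
	cmds := []struct{ name, cmd string }{
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s failed: %v ==\n", c.name, err)
		}
		fmt.Printf("== %s ==\n%s\n", c.name, out)
	}
}
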
	I0729 14:39:41.295209 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.795811 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.673853 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:45.674302 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.207555 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:46.707384 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.738629 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:44.753497 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:44.753582 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:44.793256 1039759 cri.go:89] found id: ""
	I0729 14:39:44.793283 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.793291 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:44.793298 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:44.793363 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:44.833698 1039759 cri.go:89] found id: ""
	I0729 14:39:44.833726 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.833733 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:44.833739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:44.833792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:44.876328 1039759 cri.go:89] found id: ""
	I0729 14:39:44.876357 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.876366 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:44.876372 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:44.876471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:44.918091 1039759 cri.go:89] found id: ""
	I0729 14:39:44.918121 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.918132 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:44.918140 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:44.918210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:44.965105 1039759 cri.go:89] found id: ""
	I0729 14:39:44.965137 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.965149 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:44.965157 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:44.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:45.014119 1039759 cri.go:89] found id: ""
	I0729 14:39:45.014150 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.014162 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:45.014170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:45.014243 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:45.059826 1039759 cri.go:89] found id: ""
	I0729 14:39:45.059858 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.059870 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:45.059879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:45.059946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:45.099666 1039759 cri.go:89] found id: ""
	I0729 14:39:45.099706 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.099717 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:45.099730 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:45.099748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:45.144219 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:45.144263 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:45.199719 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:45.199754 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:45.214225 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:45.214260 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:45.289090 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:45.289119 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:45.289138 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:47.860797 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:47.874523 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:47.874606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:47.913570 1039759 cri.go:89] found id: ""
	I0729 14:39:47.913599 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.913608 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:47.913615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:47.913674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:47.946699 1039759 cri.go:89] found id: ""
	I0729 14:39:47.946725 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.946734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:47.946740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:47.946792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:47.986492 1039759 cri.go:89] found id: ""
	I0729 14:39:47.986533 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.986546 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:47.986554 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:47.986635 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:48.027232 1039759 cri.go:89] found id: ""
	I0729 14:39:48.027260 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.027268 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:48.027274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:48.027327 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:48.065119 1039759 cri.go:89] found id: ""
	I0729 14:39:48.065145 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.065153 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:48.065159 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:48.065217 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:48.105087 1039759 cri.go:89] found id: ""
	I0729 14:39:48.105119 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.105128 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:48.105134 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:48.105199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:48.144684 1039759 cri.go:89] found id: ""
	I0729 14:39:48.144718 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.144730 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:48.144737 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:48.144816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:48.180350 1039759 cri.go:89] found id: ""
	I0729 14:39:48.180380 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.180389 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:48.180401 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:48.180436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:48.259859 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:48.259905 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:48.301132 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:48.301163 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:48.352753 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:48.352795 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:48.365936 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:48.365969 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:48.434634 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:46.293123 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.293674 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.294113 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:47.674411 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.173727 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.707887 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:51.207444 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.934903 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:50.948702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:50.948787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:50.982889 1039759 cri.go:89] found id: ""
	I0729 14:39:50.982917 1039759 logs.go:276] 0 containers: []
	W0729 14:39:50.982927 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:50.982933 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:50.983010 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:51.020679 1039759 cri.go:89] found id: ""
	I0729 14:39:51.020713 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.020726 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:51.020734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:51.020818 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:51.055114 1039759 cri.go:89] found id: ""
	I0729 14:39:51.055147 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.055158 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:51.055166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:51.055237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:51.089053 1039759 cri.go:89] found id: ""
	I0729 14:39:51.089087 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.089099 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:51.089108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:51.089184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:51.125823 1039759 cri.go:89] found id: ""
	I0729 14:39:51.125851 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.125861 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:51.125868 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:51.125938 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:51.162645 1039759 cri.go:89] found id: ""
	I0729 14:39:51.162683 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.162694 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:51.162702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:51.162767 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:51.196820 1039759 cri.go:89] found id: ""
	I0729 14:39:51.196849 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.196857 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:51.196864 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:51.196937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:51.236442 1039759 cri.go:89] found id: ""
	I0729 14:39:51.236469 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.236479 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:51.236491 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:51.236506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:51.317077 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:51.317101 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:51.317119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:51.398118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:51.398172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:51.437096 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:51.437128 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:51.488949 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:51.488992 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:52.795544 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.294184 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:52.174241 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.672702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:53.207592 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.706971 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.004536 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:54.019400 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:54.019480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:54.054592 1039759 cri.go:89] found id: ""
	I0729 14:39:54.054626 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.054639 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:54.054647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:54.054712 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:54.090184 1039759 cri.go:89] found id: ""
	I0729 14:39:54.090217 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.090227 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:54.090234 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:54.090304 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:54.129977 1039759 cri.go:89] found id: ""
	I0729 14:39:54.130007 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.130016 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:54.130022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:54.130081 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:54.170940 1039759 cri.go:89] found id: ""
	I0729 14:39:54.170970 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.170980 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:54.170988 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:54.171053 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:54.206197 1039759 cri.go:89] found id: ""
	I0729 14:39:54.206224 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.206244 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:54.206251 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:54.206340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:54.246929 1039759 cri.go:89] found id: ""
	I0729 14:39:54.246963 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.246973 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:54.246980 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:54.247049 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:54.286202 1039759 cri.go:89] found id: ""
	I0729 14:39:54.286231 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.286240 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:54.286245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:54.286315 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:54.321784 1039759 cri.go:89] found id: ""
	I0729 14:39:54.321815 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.321824 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:54.321837 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:54.321860 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:54.363159 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:54.363187 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:54.416151 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:54.416194 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:54.429824 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:54.429852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:54.506348 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:54.506373 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:54.506390 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.094810 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:57.108163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:57.108238 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:57.143556 1039759 cri.go:89] found id: ""
	I0729 14:39:57.143588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.143601 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:57.143608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:57.143678 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:57.177664 1039759 cri.go:89] found id: ""
	I0729 14:39:57.177695 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.177706 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:57.177714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:57.177801 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:57.212046 1039759 cri.go:89] found id: ""
	I0729 14:39:57.212106 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.212231 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:57.212249 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:57.212323 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:57.252518 1039759 cri.go:89] found id: ""
	I0729 14:39:57.252549 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.252558 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:57.252564 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:57.252677 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:57.287045 1039759 cri.go:89] found id: ""
	I0729 14:39:57.287069 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.287077 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:57.287084 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:57.287141 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:57.324553 1039759 cri.go:89] found id: ""
	I0729 14:39:57.324588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.324599 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:57.324607 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:57.324684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:57.358761 1039759 cri.go:89] found id: ""
	I0729 14:39:57.358801 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.358811 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:57.358819 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:57.358898 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:57.402023 1039759 cri.go:89] found id: ""
	I0729 14:39:57.402051 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.402062 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:57.402074 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:57.402094 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:57.445600 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:57.445632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:57.501876 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:57.501911 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:57.518264 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:57.518299 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:57.593247 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:57.593274 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:57.593292 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.793782 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.794287 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:56.673243 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.174416 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:57.707618 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.208574 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.181109 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:00.194553 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:00.194641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:00.237761 1039759 cri.go:89] found id: ""
	I0729 14:40:00.237801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.237814 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:00.237829 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:00.237901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:00.273113 1039759 cri.go:89] found id: ""
	I0729 14:40:00.273145 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.273157 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:00.273166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:00.273232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:00.312136 1039759 cri.go:89] found id: ""
	I0729 14:40:00.312169 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.312176 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:00.312182 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:00.312249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:00.349610 1039759 cri.go:89] found id: ""
	I0729 14:40:00.349642 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.349654 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:00.349662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:00.349792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:00.384121 1039759 cri.go:89] found id: ""
	I0729 14:40:00.384148 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.384157 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:00.384163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:00.384233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:00.419684 1039759 cri.go:89] found id: ""
	I0729 14:40:00.419720 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.419731 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:00.419739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:00.419809 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:00.453905 1039759 cri.go:89] found id: ""
	I0729 14:40:00.453937 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.453945 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:00.453951 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:00.454023 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:00.490124 1039759 cri.go:89] found id: ""
	I0729 14:40:00.490149 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.490158 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:00.490168 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:00.490185 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:00.562684 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:00.562713 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:00.562735 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:00.643860 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:00.643899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:00.683247 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:00.683276 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:00.734131 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:00.734166 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.249468 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:03.262712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:03.262788 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:03.300774 1039759 cri.go:89] found id: ""
	I0729 14:40:03.300801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.300816 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:03.300823 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:03.300891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:03.335367 1039759 cri.go:89] found id: ""
	I0729 14:40:03.335398 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.335409 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:03.335419 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:03.335488 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:03.375683 1039759 cri.go:89] found id: ""
	I0729 14:40:03.375717 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.375728 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:03.375734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:03.375814 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:03.409593 1039759 cri.go:89] found id: ""
	I0729 14:40:03.409623 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.409631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:03.409637 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:03.409711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:03.444531 1039759 cri.go:89] found id: ""
	I0729 14:40:03.444566 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.444578 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:03.444585 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:03.444655 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:03.479446 1039759 cri.go:89] found id: ""
	I0729 14:40:03.479476 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.479487 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:03.479495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:03.479563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:03.517277 1039759 cri.go:89] found id: ""
	I0729 14:40:03.517311 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.517321 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:03.517329 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:03.517396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:03.556343 1039759 cri.go:89] found id: ""
	I0729 14:40:03.556373 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.556381 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:03.556391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:03.556422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:03.610156 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:03.610196 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.624776 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:03.624812 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:03.696584 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:03.696609 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:03.696625 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:03.775066 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:03.775109 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:01.794683 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:03.795112 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:01.673980 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.173900 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:02.706731 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.707655 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:07.207027 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.319720 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:06.332865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:06.332937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:06.366576 1039759 cri.go:89] found id: ""
	I0729 14:40:06.366608 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.366631 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:06.366639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:06.366730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:06.402710 1039759 cri.go:89] found id: ""
	I0729 14:40:06.402735 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.402743 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:06.402748 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:06.402804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:06.439048 1039759 cri.go:89] found id: ""
	I0729 14:40:06.439095 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.439116 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:06.439125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:06.439196 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:06.473407 1039759 cri.go:89] found id: ""
	I0729 14:40:06.473443 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.473456 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:06.473464 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:06.473544 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:06.507278 1039759 cri.go:89] found id: ""
	I0729 14:40:06.507309 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.507319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:06.507327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:06.507396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:06.541573 1039759 cri.go:89] found id: ""
	I0729 14:40:06.541600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.541608 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:06.541617 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:06.541679 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:06.587666 1039759 cri.go:89] found id: ""
	I0729 14:40:06.587697 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.587707 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:06.587714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:06.587773 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:06.622415 1039759 cri.go:89] found id: ""
	I0729 14:40:06.622448 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.622459 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:06.622478 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:06.622497 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:06.659987 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:06.660019 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:06.716303 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:06.716338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:06.731051 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:06.731076 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:06.809014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:06.809045 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:06.809064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:06.293552 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:08.294453 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:10.295216 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.674445 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.174349 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.207784 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.208318 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.387843 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:09.401894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:09.401984 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:09.439385 1039759 cri.go:89] found id: ""
	I0729 14:40:09.439425 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.439438 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:09.439446 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:09.439506 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:09.474307 1039759 cri.go:89] found id: ""
	I0729 14:40:09.474340 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.474352 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:09.474361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:09.474434 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:09.508198 1039759 cri.go:89] found id: ""
	I0729 14:40:09.508233 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.508245 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:09.508253 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:09.508325 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:09.543729 1039759 cri.go:89] found id: ""
	I0729 14:40:09.543762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.543772 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:09.543779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:09.543847 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:09.598723 1039759 cri.go:89] found id: ""
	I0729 14:40:09.598760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.598769 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:09.598775 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:09.598841 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:09.636009 1039759 cri.go:89] found id: ""
	I0729 14:40:09.636038 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.636050 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:09.636058 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:09.636126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:09.675590 1039759 cri.go:89] found id: ""
	I0729 14:40:09.675618 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.675628 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:09.675636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:09.675698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:09.710331 1039759 cri.go:89] found id: ""
	I0729 14:40:09.710374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.710385 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:09.710397 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:09.710416 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:09.790014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:09.790046 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:09.790064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:09.870233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:09.870278 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:09.910421 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:09.910454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:09.962429 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:09.962474 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.476775 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:12.490804 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:12.490875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:12.529435 1039759 cri.go:89] found id: ""
	I0729 14:40:12.529466 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.529478 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:12.529485 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:12.529551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:12.564769 1039759 cri.go:89] found id: ""
	I0729 14:40:12.564806 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.564818 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:12.564826 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:12.564912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:12.600253 1039759 cri.go:89] found id: ""
	I0729 14:40:12.600285 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.600296 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:12.600304 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:12.600367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:12.636112 1039759 cri.go:89] found id: ""
	I0729 14:40:12.636146 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.636155 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:12.636161 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:12.636216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:12.675592 1039759 cri.go:89] found id: ""
	I0729 14:40:12.675621 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.675631 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:12.675639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:12.675711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:12.711438 1039759 cri.go:89] found id: ""
	I0729 14:40:12.711469 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.711480 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:12.711488 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:12.711554 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:12.745497 1039759 cri.go:89] found id: ""
	I0729 14:40:12.745524 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.745533 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:12.745539 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:12.745598 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:12.778934 1039759 cri.go:89] found id: ""
	I0729 14:40:12.778966 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.778977 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:12.778991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:12.779010 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:12.854721 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:12.854759 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:12.854780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:12.932118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:12.932158 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:12.974429 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:12.974461 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:13.030073 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:13.030108 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.795056 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.295125 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.674169 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:14.173503 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:16.175691 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:13.707268 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.708540 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.544245 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:15.559013 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:15.559090 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:15.594018 1039759 cri.go:89] found id: ""
	I0729 14:40:15.594051 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.594064 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:15.594076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:15.594147 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:15.630734 1039759 cri.go:89] found id: ""
	I0729 14:40:15.630762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.630771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:15.630777 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:15.630832 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:15.666159 1039759 cri.go:89] found id: ""
	I0729 14:40:15.666191 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.666202 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:15.666210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:15.666275 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:15.701058 1039759 cri.go:89] found id: ""
	I0729 14:40:15.701088 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.701098 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:15.701115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:15.701170 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:15.737006 1039759 cri.go:89] found id: ""
	I0729 14:40:15.737040 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.737052 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:15.737066 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:15.737139 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:15.775678 1039759 cri.go:89] found id: ""
	I0729 14:40:15.775706 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.775718 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:15.775728 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:15.775795 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:15.812239 1039759 cri.go:89] found id: ""
	I0729 14:40:15.812268 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.812276 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:15.812283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:15.812348 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:15.847653 1039759 cri.go:89] found id: ""
	I0729 14:40:15.847682 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.847693 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:15.847707 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:15.847725 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:15.903094 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:15.903137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:15.917060 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:15.917093 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:15.993458 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:15.993481 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:15.993499 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:16.073369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:16.073409 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:18.616107 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:18.630263 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:18.630340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:18.668228 1039759 cri.go:89] found id: ""
	I0729 14:40:18.668261 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.668271 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:18.668279 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:18.668342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:18.706863 1039759 cri.go:89] found id: ""
	I0729 14:40:18.706891 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.706902 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:18.706909 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:18.706978 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:18.739703 1039759 cri.go:89] found id: ""
	I0729 14:40:18.739728 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.739736 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:18.739742 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:18.739796 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:18.777025 1039759 cri.go:89] found id: ""
	I0729 14:40:18.777066 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.777077 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:18.777085 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:18.777158 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:18.814000 1039759 cri.go:89] found id: ""
	I0729 14:40:18.814026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.814039 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:18.814051 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:18.814116 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:18.851027 1039759 cri.go:89] found id: ""
	I0729 14:40:18.851058 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.851069 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:18.851076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:18.851151 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:17.796245 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.293964 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.673560 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:21.173099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.207376 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.707629 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.903888 1039759 cri.go:89] found id: ""
	I0729 14:40:18.903920 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.903932 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:18.903941 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:18.904002 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:18.938756 1039759 cri.go:89] found id: ""
	I0729 14:40:18.938784 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.938791 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:18.938801 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:18.938814 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:18.988482 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:18.988520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:19.002145 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:19.002177 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:19.085372 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:19.085397 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:19.085424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:19.171294 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:19.171387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:21.709578 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:21.722874 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:21.722941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:21.768110 1039759 cri.go:89] found id: ""
	I0729 14:40:21.768139 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.768150 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:21.768156 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:21.768210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:21.808853 1039759 cri.go:89] found id: ""
	I0729 14:40:21.808886 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.808897 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:21.808905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:21.808974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:21.843432 1039759 cri.go:89] found id: ""
	I0729 14:40:21.843472 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.843484 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:21.843493 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:21.843576 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:21.876497 1039759 cri.go:89] found id: ""
	I0729 14:40:21.876535 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.876547 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:21.876555 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:21.876633 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:21.911528 1039759 cri.go:89] found id: ""
	I0729 14:40:21.911556 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.911565 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:21.911571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:21.911626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:21.944514 1039759 cri.go:89] found id: ""
	I0729 14:40:21.944548 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.944560 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:21.944569 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:21.944641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:21.978113 1039759 cri.go:89] found id: ""
	I0729 14:40:21.978151 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.978162 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:21.978170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:21.978233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:22.012390 1039759 cri.go:89] found id: ""
	I0729 14:40:22.012438 1039759 logs.go:276] 0 containers: []
	W0729 14:40:22.012449 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:22.012461 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:22.012484 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:22.027921 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:22.027952 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:22.095087 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:22.095115 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:22.095132 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:22.178462 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:22.178495 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:22.220155 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:22.220188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:22.794431 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.295391 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:23.174050 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.673437 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:22.708012 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.207491 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:24.771932 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:24.784764 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:24.784851 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:24.820445 1039759 cri.go:89] found id: ""
	I0729 14:40:24.820473 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.820485 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:24.820501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:24.820569 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:24.854753 1039759 cri.go:89] found id: ""
	I0729 14:40:24.854786 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.854796 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:24.854802 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:24.854856 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:24.889200 1039759 cri.go:89] found id: ""
	I0729 14:40:24.889230 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.889242 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:24.889250 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:24.889312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:24.932383 1039759 cri.go:89] found id: ""
	I0729 14:40:24.932435 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.932447 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:24.932454 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:24.932515 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:24.971830 1039759 cri.go:89] found id: ""
	I0729 14:40:24.971859 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.971871 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:24.971879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:24.971936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:25.014336 1039759 cri.go:89] found id: ""
	I0729 14:40:25.014374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.014386 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:25.014397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:25.014464 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:25.048131 1039759 cri.go:89] found id: ""
	I0729 14:40:25.048161 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.048171 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:25.048177 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:25.048232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:25.089830 1039759 cri.go:89] found id: ""
	I0729 14:40:25.089866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.089878 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:25.089893 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:25.089907 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:25.172078 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:25.172113 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:25.221629 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:25.221661 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:25.291761 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:25.291806 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:25.314521 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:25.314559 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:25.402738 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:27.903335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:27.918335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:27.918413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:27.951929 1039759 cri.go:89] found id: ""
	I0729 14:40:27.951955 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.951966 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:27.951972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:27.952029 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:27.986229 1039759 cri.go:89] found id: ""
	I0729 14:40:27.986266 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.986279 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:27.986287 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:27.986352 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:28.019467 1039759 cri.go:89] found id: ""
	I0729 14:40:28.019504 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.019517 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:28.019524 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:28.019590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:28.053762 1039759 cri.go:89] found id: ""
	I0729 14:40:28.053790 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.053799 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:28.053806 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:28.053858 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:28.088947 1039759 cri.go:89] found id: ""
	I0729 14:40:28.088975 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.088989 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:28.088996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:28.089070 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:28.130018 1039759 cri.go:89] found id: ""
	I0729 14:40:28.130052 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.130064 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:28.130072 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:28.130143 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:28.163682 1039759 cri.go:89] found id: ""
	I0729 14:40:28.163715 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.163725 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:28.163734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:28.163802 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:28.199698 1039759 cri.go:89] found id: ""
	I0729 14:40:28.199732 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.199744 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:28.199757 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:28.199774 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:28.253735 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:28.253776 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:28.267786 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:28.267825 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:28.337218 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:28.337250 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:28.337265 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:28.419644 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:28.419688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:27.793963 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.293775 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:28.172846 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.173544 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:27.707884 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:29.708174 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.958707 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:30.972073 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:30.972146 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:31.016629 1039759 cri.go:89] found id: ""
	I0729 14:40:31.016662 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.016673 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:31.016681 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:31.016747 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:31.058891 1039759 cri.go:89] found id: ""
	I0729 14:40:31.058921 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.058930 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:31.058936 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:31.059004 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:31.096599 1039759 cri.go:89] found id: ""
	I0729 14:40:31.096633 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.096645 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:31.096654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:31.096741 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:31.143525 1039759 cri.go:89] found id: ""
	I0729 14:40:31.143554 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.143562 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:31.143568 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:31.143628 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:31.180188 1039759 cri.go:89] found id: ""
	I0729 14:40:31.180220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.180230 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:31.180239 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:31.180310 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:31.219995 1039759 cri.go:89] found id: ""
	I0729 14:40:31.220026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.220037 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:31.220045 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:31.220108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:31.254137 1039759 cri.go:89] found id: ""
	I0729 14:40:31.254182 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.254192 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:31.254200 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:31.254272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:31.288065 1039759 cri.go:89] found id: ""
	I0729 14:40:31.288098 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.288109 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:31.288122 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:31.288137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:31.341299 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:31.341338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:31.355357 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:31.355387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:31.427414 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:31.427439 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:31.427453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:31.508372 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:31.508439 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:32.294256 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.295131 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.174315 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.674462 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.208183 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:36.707763 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.052770 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:34.066300 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:34.066366 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:34.104242 1039759 cri.go:89] found id: ""
	I0729 14:40:34.104278 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.104290 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:34.104299 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:34.104367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:34.143092 1039759 cri.go:89] found id: ""
	I0729 14:40:34.143125 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.143137 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:34.143145 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:34.143216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:34.177966 1039759 cri.go:89] found id: ""
	I0729 14:40:34.177993 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.178002 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:34.178011 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:34.178098 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:34.218325 1039759 cri.go:89] found id: ""
	I0729 14:40:34.218351 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.218361 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:34.218369 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:34.218437 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:34.256632 1039759 cri.go:89] found id: ""
	I0729 14:40:34.256665 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.256675 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:34.256683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:34.256753 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:34.290713 1039759 cri.go:89] found id: ""
	I0729 14:40:34.290739 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.290747 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:34.290753 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:34.290816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:34.331345 1039759 cri.go:89] found id: ""
	I0729 14:40:34.331378 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.331389 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:34.331397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:34.331468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:34.370184 1039759 cri.go:89] found id: ""
	I0729 14:40:34.370214 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.370226 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:34.370239 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:34.370256 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:34.448667 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:34.448709 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:34.492943 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:34.492974 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:34.548784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:34.548827 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:34.565353 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:34.565389 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:34.639411 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.140039 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:37.153732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:37.153806 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:37.189360 1039759 cri.go:89] found id: ""
	I0729 14:40:37.189389 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.189398 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:37.189404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:37.189474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:37.225790 1039759 cri.go:89] found id: ""
	I0729 14:40:37.225820 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.225831 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:37.225839 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:37.225914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:37.261742 1039759 cri.go:89] found id: ""
	I0729 14:40:37.261772 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.261782 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:37.261791 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:37.261862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:37.295791 1039759 cri.go:89] found id: ""
	I0729 14:40:37.295826 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.295835 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:37.295843 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:37.295908 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:37.331290 1039759 cri.go:89] found id: ""
	I0729 14:40:37.331324 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.331334 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:37.331343 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:37.331413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:37.366150 1039759 cri.go:89] found id: ""
	I0729 14:40:37.366183 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.366195 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:37.366203 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:37.366273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:37.400983 1039759 cri.go:89] found id: ""
	I0729 14:40:37.401019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.401030 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:37.401038 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:37.401110 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:37.435333 1039759 cri.go:89] found id: ""
	I0729 14:40:37.435368 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.435379 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:37.435391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:37.435407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:37.488020 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:37.488057 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:37.501543 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:37.501573 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:37.576006 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.576033 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:37.576050 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:37.658600 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:37.658641 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:36.794615 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:38.795414 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:37.175174 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.674361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.207946 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:41.707724 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:40.200763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:40.216048 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:40.216121 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:40.253969 1039759 cri.go:89] found id: ""
	I0729 14:40:40.253996 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.254005 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:40.254012 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:40.254078 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:40.289557 1039759 cri.go:89] found id: ""
	I0729 14:40:40.289595 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.289608 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:40.289616 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:40.289698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:40.329756 1039759 cri.go:89] found id: ""
	I0729 14:40:40.329799 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.329823 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:40.329833 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:40.329906 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:40.365281 1039759 cri.go:89] found id: ""
	I0729 14:40:40.365315 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.365327 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:40.365335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:40.365403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:40.401300 1039759 cri.go:89] found id: ""
	I0729 14:40:40.401327 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.401336 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:40.401342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:40.401398 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:40.435679 1039759 cri.go:89] found id: ""
	I0729 14:40:40.435710 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.435719 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:40.435726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:40.435781 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:40.475825 1039759 cri.go:89] found id: ""
	I0729 14:40:40.475851 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.475859 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:40.475866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:40.475926 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:40.512153 1039759 cri.go:89] found id: ""
	I0729 14:40:40.512184 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.512191 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:40.512202 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:40.512215 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:40.563983 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:40.564022 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:40.578823 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:40.578853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:40.650282 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:40.650311 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:40.650328 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:40.734933 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:40.734980 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.280095 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:43.294284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:43.294361 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:43.328862 1039759 cri.go:89] found id: ""
	I0729 14:40:43.328890 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.328899 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:43.328905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:43.328971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:43.366321 1039759 cri.go:89] found id: ""
	I0729 14:40:43.366364 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.366376 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:43.366384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:43.366459 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:43.400189 1039759 cri.go:89] found id: ""
	I0729 14:40:43.400220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.400229 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:43.400235 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:43.400299 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:43.438521 1039759 cri.go:89] found id: ""
	I0729 14:40:43.438562 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.438582 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:43.438594 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:43.438665 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:43.473931 1039759 cri.go:89] found id: ""
	I0729 14:40:43.473958 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.473966 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:43.473972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:43.474035 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:43.511460 1039759 cri.go:89] found id: ""
	I0729 14:40:43.511490 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.511497 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:43.511506 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:43.511563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:43.547255 1039759 cri.go:89] found id: ""
	I0729 14:40:43.547290 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.547301 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:43.547309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:43.547375 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:43.582384 1039759 cri.go:89] found id: ""
	I0729 14:40:43.582418 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.582428 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:43.582441 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:43.582459 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:43.595747 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:43.595780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:43.665389 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:43.665413 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:43.665427 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:43.752669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:43.752712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.797239 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:43.797272 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:41.294242 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:43.294985 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:45.794449 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:42.173495 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.173830 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.207593 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.706855 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.352841 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:46.368204 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:46.368278 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:46.406661 1039759 cri.go:89] found id: ""
	I0729 14:40:46.406687 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.406695 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:46.406701 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:46.406761 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:46.443728 1039759 cri.go:89] found id: ""
	I0729 14:40:46.443760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.443771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:46.443778 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:46.443845 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:46.477632 1039759 cri.go:89] found id: ""
	I0729 14:40:46.477666 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.477677 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:46.477686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:46.477754 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:46.512510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.512538 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.512549 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:46.512557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:46.512629 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:46.550803 1039759 cri.go:89] found id: ""
	I0729 14:40:46.550834 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.550843 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:46.550848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:46.550914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:46.591610 1039759 cri.go:89] found id: ""
	I0729 14:40:46.591640 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.591652 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:46.591661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:46.591723 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:46.631090 1039759 cri.go:89] found id: ""
	I0729 14:40:46.631122 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.631132 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:46.631139 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:46.631199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:46.670510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.670542 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.670554 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:46.670573 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:46.670590 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:46.725560 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:46.725594 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:46.739348 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:46.739372 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:46.812850 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:46.812874 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:46.812892 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:46.892922 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:46.892964 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:47.795538 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:50.293685 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.674514 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.174577 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:48.708243 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.207168 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.438741 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:49.452505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:49.452588 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:49.487294 1039759 cri.go:89] found id: ""
	I0729 14:40:49.487323 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.487331 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:49.487340 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:49.487407 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:49.521783 1039759 cri.go:89] found id: ""
	I0729 14:40:49.521816 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.521828 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:49.521836 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:49.521901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:49.557039 1039759 cri.go:89] found id: ""
	I0729 14:40:49.557075 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.557086 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:49.557094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:49.557162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:49.590431 1039759 cri.go:89] found id: ""
	I0729 14:40:49.590462 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.590474 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:49.590494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:49.590574 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:49.626230 1039759 cri.go:89] found id: ""
	I0729 14:40:49.626260 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.626268 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:49.626274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:49.626339 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:49.662030 1039759 cri.go:89] found id: ""
	I0729 14:40:49.662060 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.662068 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:49.662075 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:49.662130 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:49.699988 1039759 cri.go:89] found id: ""
	I0729 14:40:49.700019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.700035 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:49.700076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:49.700144 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:49.736830 1039759 cri.go:89] found id: ""
	I0729 14:40:49.736864 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.736873 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:49.736882 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:49.736895 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:49.775670 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:49.775703 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:49.830820 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:49.830853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:49.846374 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:49.846407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:49.917475 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:49.917502 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:49.917520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.499291 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:52.513571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:52.513641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:52.547437 1039759 cri.go:89] found id: ""
	I0729 14:40:52.547474 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.547487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:52.547495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:52.547559 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:52.587664 1039759 cri.go:89] found id: ""
	I0729 14:40:52.587705 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.587718 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:52.587726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:52.587799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:52.630642 1039759 cri.go:89] found id: ""
	I0729 14:40:52.630670 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.630678 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:52.630684 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:52.630740 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:52.665978 1039759 cri.go:89] found id: ""
	I0729 14:40:52.666010 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.666022 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:52.666030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:52.666103 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:52.701111 1039759 cri.go:89] found id: ""
	I0729 14:40:52.701140 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.701148 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:52.701155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:52.701211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:52.744219 1039759 cri.go:89] found id: ""
	I0729 14:40:52.744247 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.744257 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:52.744265 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:52.744329 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:52.781081 1039759 cri.go:89] found id: ""
	I0729 14:40:52.781113 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.781122 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:52.781128 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:52.781198 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:52.817938 1039759 cri.go:89] found id: ""
	I0729 14:40:52.817974 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.817985 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:52.817999 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:52.818016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:52.895387 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:52.895416 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:52.895433 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.976313 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:52.976356 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:53.013814 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:53.013852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:53.065901 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:53.065937 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:52.798083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.293459 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.674103 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:54.174456 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:53.208082 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.707719 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.580590 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:55.595023 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:55.595108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:55.631449 1039759 cri.go:89] found id: ""
	I0729 14:40:55.631479 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.631487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:55.631494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:55.631551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:55.666245 1039759 cri.go:89] found id: ""
	I0729 14:40:55.666274 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.666283 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:55.666296 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:55.666364 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:55.706582 1039759 cri.go:89] found id: ""
	I0729 14:40:55.706611 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.706621 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:55.706629 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:55.706696 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:55.741930 1039759 cri.go:89] found id: ""
	I0729 14:40:55.741962 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.741973 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:55.741990 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:55.742058 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:55.781440 1039759 cri.go:89] found id: ""
	I0729 14:40:55.781475 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.781486 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:55.781494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:55.781599 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:55.825329 1039759 cri.go:89] found id: ""
	I0729 14:40:55.825366 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.825377 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:55.825387 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:55.825466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:55.860834 1039759 cri.go:89] found id: ""
	I0729 14:40:55.860866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.860878 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:55.860886 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:55.860950 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:55.895460 1039759 cri.go:89] found id: ""
	I0729 14:40:55.895492 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.895502 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:55.895514 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:55.895531 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:55.951739 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:55.951781 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:55.965760 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:55.965792 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:56.044422 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:56.044458 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:56.044477 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:56.123669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:56.123714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:58.668279 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:58.682912 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:58.682974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:58.718757 1039759 cri.go:89] found id: ""
	I0729 14:40:58.718787 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.718798 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:58.718807 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:58.718861 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:58.756986 1039759 cri.go:89] found id: ""
	I0729 14:40:58.757015 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.757025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:58.757031 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:58.757092 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:58.797572 1039759 cri.go:89] found id: ""
	I0729 14:40:58.797600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.797611 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:58.797620 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:58.797689 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:58.839410 1039759 cri.go:89] found id: ""
	I0729 14:40:58.839442 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.839453 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:58.839461 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:58.839523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:57.293935 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:59.294805 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:56.673078 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.674177 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:01.173709 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:57.708051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:00.207822 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:02.208128 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.874477 1039759 cri.go:89] found id: ""
	I0729 14:40:58.874508 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.874519 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:58.874528 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:58.874602 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:58.910248 1039759 cri.go:89] found id: ""
	I0729 14:40:58.910281 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.910296 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:58.910307 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:58.910368 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:58.944845 1039759 cri.go:89] found id: ""
	I0729 14:40:58.944879 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.944890 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:58.944896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:58.944955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:58.978818 1039759 cri.go:89] found id: ""
	I0729 14:40:58.978854 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.978867 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:58.978879 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:58.978898 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:59.018961 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:59.018993 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:59.069883 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:59.069920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:59.083277 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:59.083304 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:59.159470 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:59.159494 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:59.159511 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:01.746915 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:01.759883 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:01.759949 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:01.796563 1039759 cri.go:89] found id: ""
	I0729 14:41:01.796589 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.796602 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:01.796608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:01.796691 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:01.831464 1039759 cri.go:89] found id: ""
	I0729 14:41:01.831499 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.831511 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:01.831520 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:01.831586 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:01.868633 1039759 cri.go:89] found id: ""
	I0729 14:41:01.868660 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.868668 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:01.868674 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:01.868732 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:01.903154 1039759 cri.go:89] found id: ""
	I0729 14:41:01.903183 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.903194 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:01.903202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:01.903272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:01.938256 1039759 cri.go:89] found id: ""
	I0729 14:41:01.938292 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.938304 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:01.938312 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:01.938384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:01.978117 1039759 cri.go:89] found id: ""
	I0729 14:41:01.978147 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.978159 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:01.978168 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:01.978242 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:02.014061 1039759 cri.go:89] found id: ""
	I0729 14:41:02.014089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.014100 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:02.014108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:02.014176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:02.050133 1039759 cri.go:89] found id: ""
	I0729 14:41:02.050165 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.050177 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:02.050189 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:02.050206 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:02.101188 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:02.101253 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:02.114343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:02.114369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:02.190309 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:02.190338 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:02.190354 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:02.266895 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:02.266939 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:01.794976 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.295199 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:03.176713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:05.673543 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.708032 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:07.207702 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.809474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:04.824652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:04.824725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:04.858442 1039759 cri.go:89] found id: ""
	I0729 14:41:04.858474 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.858483 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:04.858490 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:04.858542 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:04.893199 1039759 cri.go:89] found id: ""
	I0729 14:41:04.893229 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.893237 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:04.893243 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:04.893297 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:04.929480 1039759 cri.go:89] found id: ""
	I0729 14:41:04.929512 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.929524 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:04.929532 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:04.929601 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:04.965097 1039759 cri.go:89] found id: ""
	I0729 14:41:04.965127 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.965139 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:04.965147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:04.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:05.003419 1039759 cri.go:89] found id: ""
	I0729 14:41:05.003449 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.003460 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:05.003467 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:05.003557 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:05.037408 1039759 cri.go:89] found id: ""
	I0729 14:41:05.037439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.037451 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:05.037458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:05.037527 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:05.072909 1039759 cri.go:89] found id: ""
	I0729 14:41:05.072942 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.072953 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:05.072961 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:05.073034 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:05.123731 1039759 cri.go:89] found id: ""
	I0729 14:41:05.123764 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.123776 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:05.123787 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:05.123802 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:05.188687 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:05.188732 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:05.204119 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:05.204160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:05.294702 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:05.294732 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:05.294750 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:05.377412 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:05.377456 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:07.923437 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:07.937633 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:07.937711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:07.976813 1039759 cri.go:89] found id: ""
	I0729 14:41:07.976850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:07.976861 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:07.976872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:07.976946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:08.013051 1039759 cri.go:89] found id: ""
	I0729 14:41:08.013089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.013100 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:08.013109 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:08.013177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:08.047372 1039759 cri.go:89] found id: ""
	I0729 14:41:08.047404 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.047413 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:08.047420 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:08.047477 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:08.080555 1039759 cri.go:89] found id: ""
	I0729 14:41:08.080594 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.080607 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:08.080615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:08.080684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:08.117054 1039759 cri.go:89] found id: ""
	I0729 14:41:08.117087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.117098 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:08.117106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:08.117175 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:08.152270 1039759 cri.go:89] found id: ""
	I0729 14:41:08.152295 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.152303 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:08.152309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:08.152373 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:08.188804 1039759 cri.go:89] found id: ""
	I0729 14:41:08.188830 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.188842 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:08.188848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:08.188903 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:08.225101 1039759 cri.go:89] found id: ""
	I0729 14:41:08.225139 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.225151 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:08.225164 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:08.225182 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:08.278721 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:08.278759 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:08.293417 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:08.293453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:08.371802 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:08.371825 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:08.371843 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:08.452233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:08.452274 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:06.795598 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.294006 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:08.175147 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.673937 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.707777 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:12.208180 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.993379 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:11.007599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:11.007668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:11.045603 1039759 cri.go:89] found id: ""
	I0729 14:41:11.045652 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.045675 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:11.045683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:11.045746 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:11.079682 1039759 cri.go:89] found id: ""
	I0729 14:41:11.079711 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.079722 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:11.079730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:11.079797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:11.122138 1039759 cri.go:89] found id: ""
	I0729 14:41:11.122167 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.122180 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:11.122185 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:11.122249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:11.157416 1039759 cri.go:89] found id: ""
	I0729 14:41:11.157444 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.157452 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:11.157458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:11.157514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:11.198589 1039759 cri.go:89] found id: ""
	I0729 14:41:11.198631 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.198643 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:11.198652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:11.198725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:11.238329 1039759 cri.go:89] found id: ""
	I0729 14:41:11.238360 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.238369 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:11.238376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:11.238442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:11.273283 1039759 cri.go:89] found id: ""
	I0729 14:41:11.273313 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.273322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:11.273328 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:11.273382 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:11.313927 1039759 cri.go:89] found id: ""
	I0729 14:41:11.313972 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.313984 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:11.313997 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:11.314014 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:11.366507 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:11.366546 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:11.380529 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:11.380566 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:11.451839 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:11.451862 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:11.451882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:11.537109 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:11.537150 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:11.294967 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.793738 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.173482 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:15.673025 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.706708 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:16.707135 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.104794 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:14.117474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:14.117541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:14.154117 1039759 cri.go:89] found id: ""
	I0729 14:41:14.154151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.154163 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:14.154171 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:14.154236 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:14.195762 1039759 cri.go:89] found id: ""
	I0729 14:41:14.195793 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.195804 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:14.195812 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:14.195875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:14.231434 1039759 cri.go:89] found id: ""
	I0729 14:41:14.231460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.231467 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:14.231474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:14.231523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:14.264802 1039759 cri.go:89] found id: ""
	I0729 14:41:14.264839 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.264851 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:14.264859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:14.264932 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:14.300162 1039759 cri.go:89] found id: ""
	I0729 14:41:14.300184 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.300194 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:14.300202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:14.300262 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:14.335351 1039759 cri.go:89] found id: ""
	I0729 14:41:14.335385 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.335396 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:14.335404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:14.335468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:14.370064 1039759 cri.go:89] found id: ""
	I0729 14:41:14.370096 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.370107 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:14.370115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:14.370184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:14.406506 1039759 cri.go:89] found id: ""
	I0729 14:41:14.406538 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.406549 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:14.406562 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:14.406579 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:14.445641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:14.445681 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:14.496132 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:14.496165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:14.509732 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:14.509767 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:14.581519 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:14.581541 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:14.581558 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.164487 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:17.178359 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:17.178447 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:17.213780 1039759 cri.go:89] found id: ""
	I0729 14:41:17.213869 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.213887 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:17.213896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:17.213966 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:17.251006 1039759 cri.go:89] found id: ""
	I0729 14:41:17.251045 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.251056 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:17.251063 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:17.251135 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:17.306624 1039759 cri.go:89] found id: ""
	I0729 14:41:17.306654 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.306683 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:17.306691 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:17.306775 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:17.358882 1039759 cri.go:89] found id: ""
	I0729 14:41:17.358915 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.358927 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:17.358935 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:17.359008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:17.408592 1039759 cri.go:89] found id: ""
	I0729 14:41:17.408620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.408636 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:17.408642 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:17.408705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:17.445201 1039759 cri.go:89] found id: ""
	I0729 14:41:17.445228 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.445236 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:17.445242 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:17.445305 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:17.477441 1039759 cri.go:89] found id: ""
	I0729 14:41:17.477483 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.477511 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:17.477518 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:17.477591 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:17.509148 1039759 cri.go:89] found id: ""
	I0729 14:41:17.509179 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.509190 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:17.509203 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:17.509220 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:17.559784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:17.559823 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:17.574163 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:17.574199 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:17.644249 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:17.644277 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:17.644294 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.720652 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:17.720688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:16.293977 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.793489 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.793760 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:17.674099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.173742 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.707238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:21.209948 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.261591 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:20.274649 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:20.274731 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:20.311561 1039759 cri.go:89] found id: ""
	I0729 14:41:20.311591 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.311600 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:20.311606 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:20.311668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:20.350267 1039759 cri.go:89] found id: ""
	I0729 14:41:20.350300 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.350313 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:20.350322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:20.350379 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:20.384183 1039759 cri.go:89] found id: ""
	I0729 14:41:20.384213 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.384220 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:20.384227 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:20.384288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:20.422330 1039759 cri.go:89] found id: ""
	I0729 14:41:20.422358 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.422367 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:20.422373 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:20.422442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:20.465537 1039759 cri.go:89] found id: ""
	I0729 14:41:20.465568 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.465577 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:20.465586 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:20.465663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:20.507661 1039759 cri.go:89] found id: ""
	I0729 14:41:20.507691 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.507701 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:20.507710 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:20.507774 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:20.545830 1039759 cri.go:89] found id: ""
	I0729 14:41:20.545857 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.545866 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:20.545872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:20.545936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:20.586311 1039759 cri.go:89] found id: ""
	I0729 14:41:20.586345 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.586354 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:20.586364 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:20.586379 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:20.635183 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:20.635224 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:20.649660 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:20.649701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:20.729588 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:20.729613 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:20.729632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:20.811565 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:20.811605 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:23.354318 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:23.367784 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:23.367862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:23.401929 1039759 cri.go:89] found id: ""
	I0729 14:41:23.401956 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.401965 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:23.401970 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:23.402033 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:23.437130 1039759 cri.go:89] found id: ""
	I0729 14:41:23.437161 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.437185 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:23.437205 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:23.437267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:23.474029 1039759 cri.go:89] found id: ""
	I0729 14:41:23.474066 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.474078 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:23.474087 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:23.474159 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:23.506678 1039759 cri.go:89] found id: ""
	I0729 14:41:23.506714 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.506725 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:23.506732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:23.506791 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:23.541578 1039759 cri.go:89] found id: ""
	I0729 14:41:23.541618 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.541628 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:23.541636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:23.541709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:23.575852 1039759 cri.go:89] found id: ""
	I0729 14:41:23.575883 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.575891 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:23.575898 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:23.575955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:23.610611 1039759 cri.go:89] found id: ""
	I0729 14:41:23.610638 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.610646 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:23.610653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:23.610717 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:23.650403 1039759 cri.go:89] found id: ""
	I0729 14:41:23.650429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.650438 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:23.650448 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:23.650460 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:23.701856 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:23.701899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:23.716925 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:23.716958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:23.790678 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:23.790699 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:23.790717 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:23.873204 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:23.873242 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:22.794021 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:25.294289 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:22.173787 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:24.673139 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:23.708892 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.207121 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.414319 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:26.428069 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:26.428152 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:26.462538 1039759 cri.go:89] found id: ""
	I0729 14:41:26.462578 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.462590 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:26.462599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:26.462687 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:26.496461 1039759 cri.go:89] found id: ""
	I0729 14:41:26.496501 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.496513 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:26.496521 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:26.496593 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:26.534152 1039759 cri.go:89] found id: ""
	I0729 14:41:26.534190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.534203 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:26.534210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:26.534273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:26.572986 1039759 cri.go:89] found id: ""
	I0729 14:41:26.573016 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.573024 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:26.573030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:26.573097 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:26.607330 1039759 cri.go:89] found id: ""
	I0729 14:41:26.607359 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.607370 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:26.607378 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:26.607445 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:26.643023 1039759 cri.go:89] found id: ""
	I0729 14:41:26.643056 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.643067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:26.643078 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:26.643145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:26.679820 1039759 cri.go:89] found id: ""
	I0729 14:41:26.679846 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.679856 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:26.679865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:26.679930 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:26.716433 1039759 cri.go:89] found id: ""
	I0729 14:41:26.716462 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.716470 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:26.716480 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:26.716494 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:26.794508 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:26.794529 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:26.794542 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:26.876663 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:26.876701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:26.917309 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:26.917343 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:26.969397 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:26.969436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:27.294711 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.793946 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.679220 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.173259 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:31.175213 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:28.207613 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:30.707297 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.483935 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:29.497502 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:29.497585 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:29.532671 1039759 cri.go:89] found id: ""
	I0729 14:41:29.532698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.532712 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:29.532719 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:29.532784 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:29.568058 1039759 cri.go:89] found id: ""
	I0729 14:41:29.568085 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.568096 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:29.568103 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:29.568176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:29.601173 1039759 cri.go:89] found id: ""
	I0729 14:41:29.601206 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.601216 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:29.601225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:29.601284 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:29.634333 1039759 cri.go:89] found id: ""
	I0729 14:41:29.634372 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.634384 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:29.634393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:29.634460 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:29.669669 1039759 cri.go:89] found id: ""
	I0729 14:41:29.669698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.669706 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:29.669712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:29.669777 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:29.702847 1039759 cri.go:89] found id: ""
	I0729 14:41:29.702876 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.702886 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:29.702894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:29.702960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:29.740713 1039759 cri.go:89] found id: ""
	I0729 14:41:29.740743 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.740754 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:29.740762 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:29.740846 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:29.777795 1039759 cri.go:89] found id: ""
	I0729 14:41:29.777829 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.777841 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:29.777853 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:29.777869 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:29.858713 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:29.858758 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:29.896873 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:29.896914 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:29.946905 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:29.946945 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:29.960136 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:29.960170 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:30.035951 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.536130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:32.549431 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:32.549501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:32.586069 1039759 cri.go:89] found id: ""
	I0729 14:41:32.586098 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.586117 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:32.586125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:32.586183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:32.623094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.623123 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.623132 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:32.623138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:32.623205 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:32.658370 1039759 cri.go:89] found id: ""
	I0729 14:41:32.658406 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.658418 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:32.658426 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:32.658492 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:32.696436 1039759 cri.go:89] found id: ""
	I0729 14:41:32.696469 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.696478 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:32.696484 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:32.696551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:32.731306 1039759 cri.go:89] found id: ""
	I0729 14:41:32.731340 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.731352 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:32.731361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:32.731431 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:32.767049 1039759 cri.go:89] found id: ""
	I0729 14:41:32.767087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.767098 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:32.767106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:32.767179 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:32.805094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.805126 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.805138 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:32.805147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:32.805223 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:32.840088 1039759 cri.go:89] found id: ""
	I0729 14:41:32.840116 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.840125 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:32.840137 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:32.840155 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:32.854065 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:32.854095 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:32.921447 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.921477 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:32.921493 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:33.005086 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:33.005129 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:33.042555 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:33.042617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:31.795000 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:34.293349 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:33.673734 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.674275 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:32.707849 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.210238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.593173 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:35.605965 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:35.606031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:35.639315 1039759 cri.go:89] found id: ""
	I0729 14:41:35.639355 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.639367 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:35.639374 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:35.639466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:35.678657 1039759 cri.go:89] found id: ""
	I0729 14:41:35.678686 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.678695 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:35.678700 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:35.678764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:35.714108 1039759 cri.go:89] found id: ""
	I0729 14:41:35.714136 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.714147 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:35.714155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:35.714220 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:35.748793 1039759 cri.go:89] found id: ""
	I0729 14:41:35.748820 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.748831 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:35.748837 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:35.748891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:35.788853 1039759 cri.go:89] found id: ""
	I0729 14:41:35.788884 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.788895 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:35.788903 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:35.788971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:35.825032 1039759 cri.go:89] found id: ""
	I0729 14:41:35.825059 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.825067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:35.825074 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:35.825126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:35.859990 1039759 cri.go:89] found id: ""
	I0729 14:41:35.860022 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.860033 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:35.860041 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:35.860131 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:35.894318 1039759 cri.go:89] found id: ""
	I0729 14:41:35.894352 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.894364 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:35.894377 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:35.894393 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:35.907591 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:35.907617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:35.975000 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:35.975023 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:35.975040 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:36.056188 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:36.056226 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:36.094569 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:36.094606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.648685 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:38.661546 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:38.661612 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:38.698658 1039759 cri.go:89] found id: ""
	I0729 14:41:38.698692 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.698704 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:38.698711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:38.698797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:38.731239 1039759 cri.go:89] found id: ""
	I0729 14:41:38.731274 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.731282 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:38.731288 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:38.731341 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:38.766549 1039759 cri.go:89] found id: ""
	I0729 14:41:38.766583 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.766594 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:38.766602 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:38.766663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:38.803347 1039759 cri.go:89] found id: ""
	I0729 14:41:38.803374 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.803385 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:38.803393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:38.803467 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:38.840327 1039759 cri.go:89] found id: ""
	I0729 14:41:38.840363 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.840374 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:38.840384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:38.840480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:38.874181 1039759 cri.go:89] found id: ""
	I0729 14:41:38.874211 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.874219 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:38.874225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:38.874293 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:36.297301 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.794975 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.173718 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:40.675880 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:37.707171 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:39.709125 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:42.206569 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.908642 1039759 cri.go:89] found id: ""
	I0729 14:41:38.908674 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.908686 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:38.908694 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:38.908762 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:38.945081 1039759 cri.go:89] found id: ""
	I0729 14:41:38.945107 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.945116 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:38.945126 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:38.945140 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.999792 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:38.999826 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:39.013396 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:39.013421 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:39.077975 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:39.077998 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:39.078016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:39.169606 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:39.169654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.716258 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:41.730508 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:41.730579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:41.766457 1039759 cri.go:89] found id: ""
	I0729 14:41:41.766490 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.766498 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:41.766505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:41.766571 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:41.801073 1039759 cri.go:89] found id: ""
	I0729 14:41:41.801099 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.801109 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:41.801117 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:41.801178 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:41.836962 1039759 cri.go:89] found id: ""
	I0729 14:41:41.836986 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.836997 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:41.837005 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:41.837072 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:41.870169 1039759 cri.go:89] found id: ""
	I0729 14:41:41.870195 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.870205 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:41.870213 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:41.870274 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:41.902298 1039759 cri.go:89] found id: ""
	I0729 14:41:41.902323 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.902331 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:41.902337 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:41.902387 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:41.935394 1039759 cri.go:89] found id: ""
	I0729 14:41:41.935429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.935441 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:41.935449 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:41.935513 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:41.972397 1039759 cri.go:89] found id: ""
	I0729 14:41:41.972437 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.972448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:41.972456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:41.972525 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:42.006477 1039759 cri.go:89] found id: ""
	I0729 14:41:42.006503 1039759 logs.go:276] 0 containers: []
	W0729 14:41:42.006513 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:42.006526 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:42.006540 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:42.053853 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:42.053886 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:42.067143 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:42.067172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:42.135406 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:42.135432 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:42.135449 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:42.212571 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:42.212603 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.293241 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.294160 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.793697 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.173087 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.174327 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.206854 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:46.707167 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.751283 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:44.764600 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:44.764688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:44.800821 1039759 cri.go:89] found id: ""
	I0729 14:41:44.800850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.800857 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:44.800863 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:44.800924 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:44.834638 1039759 cri.go:89] found id: ""
	I0729 14:41:44.834670 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.834680 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:44.834686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:44.834744 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:44.870198 1039759 cri.go:89] found id: ""
	I0729 14:41:44.870225 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.870237 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:44.870245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:44.870312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:44.904588 1039759 cri.go:89] found id: ""
	I0729 14:41:44.904620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.904631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:44.904639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:44.904713 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:44.939442 1039759 cri.go:89] found id: ""
	I0729 14:41:44.939467 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.939474 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:44.939480 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:44.939541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:44.972771 1039759 cri.go:89] found id: ""
	I0729 14:41:44.972799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.972808 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:44.972815 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:44.972888 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:45.007513 1039759 cri.go:89] found id: ""
	I0729 14:41:45.007540 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.007549 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:45.007557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:45.007626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:45.038752 1039759 cri.go:89] found id: ""
	I0729 14:41:45.038778 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.038787 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:45.038797 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:45.038821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:45.089807 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:45.089838 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:45.103188 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:45.103221 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:45.174509 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:45.174532 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:45.174554 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:45.255288 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:45.255327 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:47.799207 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:47.814781 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:47.814866 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:47.855111 1039759 cri.go:89] found id: ""
	I0729 14:41:47.855143 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.855156 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:47.855164 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:47.855230 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:47.892542 1039759 cri.go:89] found id: ""
	I0729 14:41:47.892577 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.892589 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:47.892603 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:47.892674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:47.933408 1039759 cri.go:89] found id: ""
	I0729 14:41:47.933439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.933451 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:47.933458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:47.933531 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:47.970397 1039759 cri.go:89] found id: ""
	I0729 14:41:47.970427 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.970439 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:47.970447 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:47.970514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:48.006852 1039759 cri.go:89] found id: ""
	I0729 14:41:48.006880 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.006891 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:48.006899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:48.006967 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:48.046766 1039759 cri.go:89] found id: ""
	I0729 14:41:48.046799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.046811 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:48.046820 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:48.046893 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:48.084354 1039759 cri.go:89] found id: ""
	I0729 14:41:48.084380 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.084387 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:48.084393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:48.084468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:48.121526 1039759 cri.go:89] found id: ""
	I0729 14:41:48.121559 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.121571 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:48.121582 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:48.121606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:48.136753 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:48.136784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:48.206914 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:48.206942 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:48.206958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:48.283843 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:48.283882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:48.325845 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:48.325878 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:47.794096 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.295275 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:47.182903 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.672827 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.206572 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.206900 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.881346 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:50.894098 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:50.894177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:50.927345 1039759 cri.go:89] found id: ""
	I0729 14:41:50.927375 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.927386 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:50.927399 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:50.927466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:50.962700 1039759 cri.go:89] found id: ""
	I0729 14:41:50.962726 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.962734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:50.962740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:50.962804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:50.997299 1039759 cri.go:89] found id: ""
	I0729 14:41:50.997334 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.997346 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:50.997354 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:50.997419 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:51.030157 1039759 cri.go:89] found id: ""
	I0729 14:41:51.030190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.030202 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:51.030211 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:51.030288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:51.063123 1039759 cri.go:89] found id: ""
	I0729 14:41:51.063151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.063162 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:51.063170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:51.063237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:51.096772 1039759 cri.go:89] found id: ""
	I0729 14:41:51.096819 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.096830 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:51.096838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:51.096912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:51.131976 1039759 cri.go:89] found id: ""
	I0729 14:41:51.132004 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.132014 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:51.132022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:51.132095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:51.167560 1039759 cri.go:89] found id: ""
	I0729 14:41:51.167599 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.167610 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:51.167622 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:51.167640 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:51.229416 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:51.229455 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:51.243576 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:51.243604 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:51.311103 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:51.311123 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:51.311139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:51.396369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:51.396432 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:52.793981 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.294172 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.673945 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:54.173681 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:56.174098 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.207656 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.709310 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.942329 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:53.955960 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:53.956027 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:53.988039 1039759 cri.go:89] found id: ""
	I0729 14:41:53.988074 1039759 logs.go:276] 0 containers: []
	W0729 14:41:53.988085 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:53.988094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:53.988162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:54.020948 1039759 cri.go:89] found id: ""
	I0729 14:41:54.020981 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.020992 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:54.020999 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:54.021067 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:54.053716 1039759 cri.go:89] found id: ""
	I0729 14:41:54.053744 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.053752 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:54.053759 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:54.053811 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:54.092348 1039759 cri.go:89] found id: ""
	I0729 14:41:54.092378 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.092390 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:54.092398 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:54.092471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:54.126114 1039759 cri.go:89] found id: ""
	I0729 14:41:54.126176 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.126189 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:54.126199 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:54.126316 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:54.162125 1039759 cri.go:89] found id: ""
	I0729 14:41:54.162157 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.162167 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:54.162174 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:54.162241 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:54.202407 1039759 cri.go:89] found id: ""
	I0729 14:41:54.202439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.202448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:54.202456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:54.202522 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:54.238650 1039759 cri.go:89] found id: ""
	I0729 14:41:54.238684 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.238695 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:54.238704 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:54.238718 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:54.291200 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:54.291243 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:54.306381 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:54.306415 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:54.371355 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:54.371384 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:54.371399 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:54.455200 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:54.455237 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:56.994689 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:57.007893 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:57.007958 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:57.041775 1039759 cri.go:89] found id: ""
	I0729 14:41:57.041808 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.041820 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:57.041828 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:57.041894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:57.075409 1039759 cri.go:89] found id: ""
	I0729 14:41:57.075442 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.075454 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:57.075462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:57.075524 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:57.120963 1039759 cri.go:89] found id: ""
	I0729 14:41:57.121000 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.121011 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:57.121019 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:57.121088 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:57.164882 1039759 cri.go:89] found id: ""
	I0729 14:41:57.164912 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.164923 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:57.164932 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:57.165001 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:57.198511 1039759 cri.go:89] found id: ""
	I0729 14:41:57.198537 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.198545 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:57.198550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:57.198604 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:57.238516 1039759 cri.go:89] found id: ""
	I0729 14:41:57.238544 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.238552 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:57.238559 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:57.238622 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:57.271823 1039759 cri.go:89] found id: ""
	I0729 14:41:57.271854 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.271865 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:57.271873 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:57.271937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:57.308435 1039759 cri.go:89] found id: ""
	I0729 14:41:57.308460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.308472 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:57.308483 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:57.308506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:57.359783 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:57.359818 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:57.372669 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:57.372698 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:57.440979 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:57.441004 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:57.441018 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:57.520105 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:57.520139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:57.295421 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:59.793704 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.673850 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:01.172547 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.207493 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.208108 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:02.208334 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.060542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:00.076125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:00.076192 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:00.113095 1039759 cri.go:89] found id: ""
	I0729 14:42:00.113129 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.113137 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:00.113150 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:00.113206 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:00.154104 1039759 cri.go:89] found id: ""
	I0729 14:42:00.154132 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.154139 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:00.154146 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:00.154202 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:00.190416 1039759 cri.go:89] found id: ""
	I0729 14:42:00.190443 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.190454 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:00.190462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:00.190532 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:00.228138 1039759 cri.go:89] found id: ""
	I0729 14:42:00.228173 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.228185 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:00.228192 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:00.228261 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:00.265679 1039759 cri.go:89] found id: ""
	I0729 14:42:00.265706 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.265715 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:00.265721 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:00.265787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:00.300283 1039759 cri.go:89] found id: ""
	I0729 14:42:00.300315 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.300333 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:00.300341 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:00.300433 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:00.339224 1039759 cri.go:89] found id: ""
	I0729 14:42:00.339255 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.339264 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:00.339270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:00.339333 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:00.375780 1039759 cri.go:89] found id: ""
	I0729 14:42:00.375815 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.375826 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:00.375836 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:00.375851 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:00.425145 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:00.425190 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:00.438860 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:00.438891 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:00.512668 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:00.512695 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:00.512714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:00.597083 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:00.597139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.141962 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:03.156295 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:03.156372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:03.192860 1039759 cri.go:89] found id: ""
	I0729 14:42:03.192891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.192902 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:03.192911 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:03.192982 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:03.234078 1039759 cri.go:89] found id: ""
	I0729 14:42:03.234104 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.234113 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:03.234119 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:03.234171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:03.268099 1039759 cri.go:89] found id: ""
	I0729 14:42:03.268124 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.268131 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:03.268138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:03.268197 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:03.306470 1039759 cri.go:89] found id: ""
	I0729 14:42:03.306498 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.306507 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:03.306513 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:03.306596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:03.341902 1039759 cri.go:89] found id: ""
	I0729 14:42:03.341933 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.341944 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:03.341952 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:03.342019 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:03.377235 1039759 cri.go:89] found id: ""
	I0729 14:42:03.377271 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.377282 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:03.377291 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:03.377355 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:03.411273 1039759 cri.go:89] found id: ""
	I0729 14:42:03.411308 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.411316 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:03.411322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:03.411397 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:03.446482 1039759 cri.go:89] found id: ""
	I0729 14:42:03.446511 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.446519 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:03.446530 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:03.446545 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:03.460222 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:03.460262 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:03.548149 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:03.548175 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:03.548191 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:03.640563 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:03.640608 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.681685 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:03.681713 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:02.293412 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.793239 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:03.174082 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:05.674438 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.706798 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.707818 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.234967 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:06.249656 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:06.249726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:06.284768 1039759 cri.go:89] found id: ""
	I0729 14:42:06.284798 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.284810 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:06.284822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:06.284880 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:06.321109 1039759 cri.go:89] found id: ""
	I0729 14:42:06.321140 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.321150 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:06.321158 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:06.321229 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:06.357238 1039759 cri.go:89] found id: ""
	I0729 14:42:06.357269 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.357278 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:06.357284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:06.357342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:06.391613 1039759 cri.go:89] found id: ""
	I0729 14:42:06.391643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.391653 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:06.391661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:06.391726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:06.428782 1039759 cri.go:89] found id: ""
	I0729 14:42:06.428813 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.428823 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:06.428831 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:06.428890 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:06.463558 1039759 cri.go:89] found id: ""
	I0729 14:42:06.463596 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.463607 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:06.463615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:06.463683 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:06.500442 1039759 cri.go:89] found id: ""
	I0729 14:42:06.500474 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.500484 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:06.500501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:06.500579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:06.535589 1039759 cri.go:89] found id: ""
	I0729 14:42:06.535627 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.535638 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:06.535650 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:06.535668 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:06.584641 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:06.584676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:06.597702 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:06.597737 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:06.664499 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:06.664537 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:06.664555 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:06.744808 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:06.744845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:06.793853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.294853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.172993 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:10.174863 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.707874 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:11.209387 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.286151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:09.307822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:09.307892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:09.369334 1039759 cri.go:89] found id: ""
	I0729 14:42:09.369363 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.369373 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:09.369381 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:09.369458 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:09.402302 1039759 cri.go:89] found id: ""
	I0729 14:42:09.402334 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.402345 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:09.402353 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:09.402423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:09.436351 1039759 cri.go:89] found id: ""
	I0729 14:42:09.436380 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.436402 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:09.436429 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:09.436501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:09.467735 1039759 cri.go:89] found id: ""
	I0729 14:42:09.467768 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.467780 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:09.467788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:09.467849 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:09.503328 1039759 cri.go:89] found id: ""
	I0729 14:42:09.503355 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.503367 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:09.503376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:09.503438 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:09.540012 1039759 cri.go:89] found id: ""
	I0729 14:42:09.540039 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.540047 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:09.540053 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:09.540106 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:09.576737 1039759 cri.go:89] found id: ""
	I0729 14:42:09.576801 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.576814 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:09.576822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:09.576920 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:09.614624 1039759 cri.go:89] found id: ""
	I0729 14:42:09.614651 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.614659 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:09.614669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:09.614684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:09.650533 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:09.650580 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:09.709144 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:09.709175 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:09.724147 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:09.724173 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:09.790737 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:09.790760 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:09.790775 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.376968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:12.390344 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:12.390409 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:12.424820 1039759 cri.go:89] found id: ""
	I0729 14:42:12.424849 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.424860 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:12.424876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:12.424943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:12.457444 1039759 cri.go:89] found id: ""
	I0729 14:42:12.457480 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.457492 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:12.457500 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:12.457561 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:12.490027 1039759 cri.go:89] found id: ""
	I0729 14:42:12.490058 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.490069 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:12.490077 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:12.490145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:12.523229 1039759 cri.go:89] found id: ""
	I0729 14:42:12.523256 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.523265 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:12.523270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:12.523321 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:12.557849 1039759 cri.go:89] found id: ""
	I0729 14:42:12.557875 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.557885 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:12.557891 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:12.557951 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:12.592943 1039759 cri.go:89] found id: ""
	I0729 14:42:12.592973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.592982 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:12.592989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:12.593059 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:12.626495 1039759 cri.go:89] found id: ""
	I0729 14:42:12.626531 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.626539 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:12.626557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:12.626641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:12.663764 1039759 cri.go:89] found id: ""
	I0729 14:42:12.663793 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.663805 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:12.663818 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:12.663835 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:12.722521 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:12.722556 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:12.736476 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:12.736505 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:12.809582 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:12.809617 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:12.809637 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.890665 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:12.890712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:11.793144 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.793447 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.794630 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:12.673257 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.173702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.707929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.707964 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.429702 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:15.443258 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:15.443340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:15.477170 1039759 cri.go:89] found id: ""
	I0729 14:42:15.477198 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.477207 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:15.477212 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:15.477266 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:15.511614 1039759 cri.go:89] found id: ""
	I0729 14:42:15.511652 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.511665 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:15.511671 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:15.511739 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:15.548472 1039759 cri.go:89] found id: ""
	I0729 14:42:15.548501 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.548511 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:15.548519 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:15.548590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:15.589060 1039759 cri.go:89] found id: ""
	I0729 14:42:15.589090 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.589102 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:15.589110 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:15.589185 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:15.622846 1039759 cri.go:89] found id: ""
	I0729 14:42:15.622873 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.622882 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:15.622887 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:15.622943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:15.656193 1039759 cri.go:89] found id: ""
	I0729 14:42:15.656220 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.656229 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:15.656237 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:15.656307 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:15.691301 1039759 cri.go:89] found id: ""
	I0729 14:42:15.691336 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.691348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:15.691357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:15.691420 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:15.729923 1039759 cri.go:89] found id: ""
	I0729 14:42:15.729963 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.729974 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:15.729988 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:15.730004 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:15.783531 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:15.783569 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:15.799590 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:15.799619 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:15.874849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:15.874886 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:15.874901 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:15.957384 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:15.957424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.497035 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:18.511538 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:18.511616 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:18.550512 1039759 cri.go:89] found id: ""
	I0729 14:42:18.550552 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.550573 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:18.550582 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:18.550642 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:18.585910 1039759 cri.go:89] found id: ""
	I0729 14:42:18.585942 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.585954 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:18.585962 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:18.586031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:18.619680 1039759 cri.go:89] found id: ""
	I0729 14:42:18.619712 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.619722 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:18.619730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:18.619799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:18.651559 1039759 cri.go:89] found id: ""
	I0729 14:42:18.651592 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.651604 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:18.651613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:18.651688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:18.686668 1039759 cri.go:89] found id: ""
	I0729 14:42:18.686693 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.686701 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:18.686711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:18.686764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:18.722832 1039759 cri.go:89] found id: ""
	I0729 14:42:18.722859 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.722869 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:18.722876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:18.722927 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:18.758261 1039759 cri.go:89] found id: ""
	I0729 14:42:18.758289 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.758302 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:18.758310 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:18.758378 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:18.795190 1039759 cri.go:89] found id: ""
	I0729 14:42:18.795216 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.795227 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:18.795237 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:18.795251 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.835331 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:18.835366 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:17.796916 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.294082 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:17.673000 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:19.674010 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.209178 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.707421 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.889707 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:18.889745 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:18.902477 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:18.902503 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:18.970712 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:18.970735 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:18.970748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:21.552092 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:21.566581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.566669 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.600230 1039759 cri.go:89] found id: ""
	I0729 14:42:21.600261 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.600275 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:21.600283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.600346 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.636576 1039759 cri.go:89] found id: ""
	I0729 14:42:21.636616 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.636627 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:21.636635 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.636705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.672944 1039759 cri.go:89] found id: ""
	I0729 14:42:21.672973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.672984 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:21.672997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.673063 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.708555 1039759 cri.go:89] found id: ""
	I0729 14:42:21.708582 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.708601 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:21.708613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:21.708673 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:21.744862 1039759 cri.go:89] found id: ""
	I0729 14:42:21.744891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.744902 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:21.744908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:21.744973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:21.779084 1039759 cri.go:89] found id: ""
	I0729 14:42:21.779111 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.779119 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:21.779126 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:21.779183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:21.819931 1039759 cri.go:89] found id: ""
	I0729 14:42:21.819972 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.819981 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:21.819989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:21.820047 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:21.855472 1039759 cri.go:89] found id: ""
	I0729 14:42:21.855500 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.855509 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:21.855522 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:21.855539 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:21.925561 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:21.925579 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:21.925596 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.015986 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:22.016032 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:22.059898 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:22.059935 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:22.129018 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.129055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:21.787886 1039263 pod_ready.go:81] duration metric: took 4m0.000465481s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:21.787929 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 14:42:21.787945 1039263 pod_ready.go:38] duration metric: took 4m5.237036546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:21.787973 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:42:21.788025 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.788089 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.857594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:21.857613 1039263 cri.go:89] found id: ""
	I0729 14:42:21.857620 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:21.857674 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.862462 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.862523 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.903562 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:21.903594 1039263 cri.go:89] found id: ""
	I0729 14:42:21.903604 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:21.903660 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.908232 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.908327 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.947632 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:21.947663 1039263 cri.go:89] found id: ""
	I0729 14:42:21.947674 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:21.947737 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.952576 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.952649 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.995318 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:21.995343 1039263 cri.go:89] found id: ""
	I0729 14:42:21.995351 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:21.995418 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.000352 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:22.000440 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:22.040544 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.040572 1039263 cri.go:89] found id: ""
	I0729 14:42:22.040582 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:22.040648 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.044840 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:22.044910 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:22.090787 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:22.090816 1039263 cri.go:89] found id: ""
	I0729 14:42:22.090827 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:22.090897 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.096748 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:22.096826 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:22.143491 1039263 cri.go:89] found id: ""
	I0729 14:42:22.143522 1039263 logs.go:276] 0 containers: []
	W0729 14:42:22.143534 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:22.143541 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:22.143609 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:22.179378 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:22.179404 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:22.179409 1039263 cri.go:89] found id: ""
	I0729 14:42:22.179419 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:22.179482 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.184686 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.189009 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:22.189029 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:22.250475 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:22.250510 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.286581 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:22.286622 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.325541 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:22.325570 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.831822 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.831875 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:22.846540 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:22.846588 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:22.970758 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:22.970796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:23.013428 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:23.013467 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:23.064784 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:23.064820 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:23.111615 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:23.111653 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:23.151296 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:23.151328 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:23.198650 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:23.198692 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:23.259196 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:23.259247 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.808980 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:25.829180 1039263 api_server.go:72] duration metric: took 4m16.997740137s to wait for apiserver process to appear ...
	I0729 14:42:25.829211 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:42:25.829260 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:25.829335 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:25.875138 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.875167 1039263 cri.go:89] found id: ""
	I0729 14:42:25.875175 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:25.875230 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.879855 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:25.879937 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:25.916938 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:25.916964 1039263 cri.go:89] found id: ""
	I0729 14:42:25.916974 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:25.917036 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.921166 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:25.921224 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:25.958196 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:25.958224 1039263 cri.go:89] found id: ""
	I0729 14:42:25.958234 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:25.958300 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.962697 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:25.962760 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:26.000162 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:26.000195 1039263 cri.go:89] found id: ""
	I0729 14:42:26.000206 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:26.000277 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.004518 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:26.004594 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:26.041099 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:26.041133 1039263 cri.go:89] found id: ""
	I0729 14:42:26.041144 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:26.041208 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.045334 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:26.045412 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:26.082783 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:26.082815 1039263 cri.go:89] found id: ""
	I0729 14:42:26.082826 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:26.082901 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.086996 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:26.087063 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:26.123636 1039263 cri.go:89] found id: ""
	I0729 14:42:26.123677 1039263 logs.go:276] 0 containers: []
	W0729 14:42:26.123688 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:26.123694 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:26.123756 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:26.163819 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.163849 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.163855 1039263 cri.go:89] found id: ""
	I0729 14:42:26.163864 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:26.163929 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.168611 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.173125 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:26.173155 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.173593 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:22.708101 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:25.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:27.207926 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.645474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:24.658107 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:24.658171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:24.696604 1039759 cri.go:89] found id: ""
	I0729 14:42:24.696635 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.696645 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:24.696653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:24.696725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:24.733862 1039759 cri.go:89] found id: ""
	I0729 14:42:24.733887 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.733894 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:24.733901 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:24.733957 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:24.770614 1039759 cri.go:89] found id: ""
	I0729 14:42:24.770644 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.770656 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:24.770664 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:24.770734 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:24.806368 1039759 cri.go:89] found id: ""
	I0729 14:42:24.806394 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.806403 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:24.806408 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:24.806470 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:24.838490 1039759 cri.go:89] found id: ""
	I0729 14:42:24.838526 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.838534 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:24.838541 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:24.838596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:24.871017 1039759 cri.go:89] found id: ""
	I0729 14:42:24.871043 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.871051 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:24.871057 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:24.871128 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:24.903281 1039759 cri.go:89] found id: ""
	I0729 14:42:24.903311 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.903322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:24.903330 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:24.903403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:24.937245 1039759 cri.go:89] found id: ""
	I0729 14:42:24.937279 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.937291 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:24.937304 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:24.937319 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:24.989518 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:24.989551 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:25.005021 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:25.005055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:25.080849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:25.080877 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:25.080893 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:25.163742 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:25.163784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:27.706182 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:27.719350 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:27.719425 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:27.756955 1039759 cri.go:89] found id: ""
	I0729 14:42:27.756982 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.756990 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:27.756997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:27.757054 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:27.791975 1039759 cri.go:89] found id: ""
	I0729 14:42:27.792014 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.792025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:27.792033 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:27.792095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:27.834188 1039759 cri.go:89] found id: ""
	I0729 14:42:27.834215 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.834223 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:27.834230 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:27.834296 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:27.867798 1039759 cri.go:89] found id: ""
	I0729 14:42:27.867834 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.867843 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:27.867851 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:27.867918 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:27.900316 1039759 cri.go:89] found id: ""
	I0729 14:42:27.900343 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.900351 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:27.900357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:27.900422 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:27.932361 1039759 cri.go:89] found id: ""
	I0729 14:42:27.932391 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.932402 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:27.932425 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:27.932493 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:27.965530 1039759 cri.go:89] found id: ""
	I0729 14:42:27.965562 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.965573 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:27.965581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:27.965651 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:27.999582 1039759 cri.go:89] found id: ""
	I0729 14:42:27.999608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.999617 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:27.999626 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:27.999654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:28.069415 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:28.069438 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:28.069454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:28.149781 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:28.149821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:28.190045 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:28.190072 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:28.244147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:28.244188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.217755 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:26.217796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.257363 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:26.257399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.297502 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:26.297534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:26.729336 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:26.729370 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:26.779172 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:26.779213 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.794369 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:26.794399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:26.857964 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:26.858000 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.895052 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:26.895083 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:26.936360 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:26.936395 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:27.037118 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:27.037160 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:27.089764 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:27.089798 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:27.134009 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:27.134042 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.690960 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:42:29.696457 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:42:29.697313 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:42:29.697335 1039263 api_server.go:131] duration metric: took 3.868117139s to wait for apiserver health ...
	I0729 14:42:29.697343 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:42:29.697370 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:29.697430 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:29.740594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:29.740623 1039263 cri.go:89] found id: ""
	I0729 14:42:29.740633 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:29.740696 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.745183 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:29.745257 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:29.780091 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:29.780112 1039263 cri.go:89] found id: ""
	I0729 14:42:29.780119 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:29.780178 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.784241 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:29.784305 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:29.825618 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:29.825641 1039263 cri.go:89] found id: ""
	I0729 14:42:29.825649 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:29.825715 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.830291 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:29.830351 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:29.866651 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:29.866685 1039263 cri.go:89] found id: ""
	I0729 14:42:29.866695 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:29.866758 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.871440 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:29.871494 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:29.911944 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:29.911968 1039263 cri.go:89] found id: ""
	I0729 14:42:29.911976 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:29.912037 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.916604 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:29.916680 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:29.954334 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.954361 1039263 cri.go:89] found id: ""
	I0729 14:42:29.954371 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:29.954446 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.959051 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:29.959130 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:29.996760 1039263 cri.go:89] found id: ""
	I0729 14:42:29.996795 1039263 logs.go:276] 0 containers: []
	W0729 14:42:29.996804 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:29.996812 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:29.996883 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:30.034562 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.034598 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.034604 1039263 cri.go:89] found id: ""
	I0729 14:42:30.034614 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:30.034682 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.039588 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.043866 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:30.043889 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:30.091309 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:30.091349 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:30.149888 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:30.149926 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:30.189441 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:30.189479 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:30.250850 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:30.250890 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.290077 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:30.290111 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.329035 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:30.329068 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:30.383068 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:30.383113 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:30.497009 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:30.497045 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:30.914489 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:30.914534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:30.972901 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:30.972951 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:31.021798 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.021838 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:31.040147 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:31.040182 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.674294 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.173375 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:31.173588 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.710051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:32.209382 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.593681 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:42:33.593711 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.593716 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.593719 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.593723 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.593725 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.593728 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.593733 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.593736 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.593744 1039263 system_pods.go:74] duration metric: took 3.896394577s to wait for pod list to return data ...
	I0729 14:42:33.593751 1039263 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:42:33.596176 1039263 default_sa.go:45] found service account: "default"
	I0729 14:42:33.596197 1039263 default_sa.go:55] duration metric: took 2.440561ms for default service account to be created ...
	I0729 14:42:33.596205 1039263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:42:33.601830 1039263 system_pods.go:86] 8 kube-system pods found
	I0729 14:42:33.601855 1039263 system_pods.go:89] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.601861 1039263 system_pods.go:89] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.601866 1039263 system_pods.go:89] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.601871 1039263 system_pods.go:89] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.601878 1039263 system_pods.go:89] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.601887 1039263 system_pods.go:89] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.601897 1039263 system_pods.go:89] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.601908 1039263 system_pods.go:89] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.601921 1039263 system_pods.go:126] duration metric: took 5.70985ms to wait for k8s-apps to be running ...
	I0729 14:42:33.601934 1039263 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:42:33.601994 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:33.620869 1039263 system_svc.go:56] duration metric: took 18.921974ms WaitForService to wait for kubelet
	I0729 14:42:33.620907 1039263 kubeadm.go:582] duration metric: took 4m24.7894747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:42:33.620939 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:42:33.623517 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:42:33.623538 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:42:33.623562 1039263 node_conditions.go:105] duration metric: took 2.617272ms to run NodePressure ...
	I0729 14:42:33.623582 1039263 start.go:241] waiting for startup goroutines ...
	I0729 14:42:33.623591 1039263 start.go:246] waiting for cluster config update ...
	I0729 14:42:33.623601 1039263 start.go:255] writing updated cluster config ...
	I0729 14:42:33.623897 1039263 ssh_runner.go:195] Run: rm -f paused
	I0729 14:42:33.677961 1039263 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:42:33.679952 1039263 out.go:177] * Done! kubectl is now configured to use "embed-certs-668123" cluster and "default" namespace by default
	I0729 14:42:30.758335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:30.771788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:30.771860 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:30.807608 1039759 cri.go:89] found id: ""
	I0729 14:42:30.807633 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.807641 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:30.807647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:30.807709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:30.842361 1039759 cri.go:89] found id: ""
	I0729 14:42:30.842389 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.842397 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:30.842404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:30.842474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:30.879123 1039759 cri.go:89] found id: ""
	I0729 14:42:30.879149 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.879157 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:30.879162 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:30.879228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:30.913042 1039759 cri.go:89] found id: ""
	I0729 14:42:30.913072 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.913084 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:30.913092 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:30.913162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:30.949867 1039759 cri.go:89] found id: ""
	I0729 14:42:30.949900 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.949910 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:30.949919 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:30.949988 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:30.997468 1039759 cri.go:89] found id: ""
	I0729 14:42:30.997497 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.997509 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:30.997516 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:30.997606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:31.039611 1039759 cri.go:89] found id: ""
	I0729 14:42:31.039643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.039654 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:31.039662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:31.039730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:31.085802 1039759 cri.go:89] found id: ""
	I0729 14:42:31.085839 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.085851 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:31.085862 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:31.085890 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:31.155919 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:31.155941 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:31.155954 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:31.232795 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:31.232833 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:31.270647 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:31.270682 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:31.324648 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.324685 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:33.839801 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:33.853358 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:33.853417 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:33.674345 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:36.174468 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:34.707752 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:37.209918 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.889294 1039759 cri.go:89] found id: ""
	I0729 14:42:33.889323 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.889334 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:33.889342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:33.889413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:33.930106 1039759 cri.go:89] found id: ""
	I0729 14:42:33.930130 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.930142 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:33.930149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:33.930211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:33.973607 1039759 cri.go:89] found id: ""
	I0729 14:42:33.973634 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.973646 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:33.973654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:33.973715 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:34.010103 1039759 cri.go:89] found id: ""
	I0729 14:42:34.010133 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.010142 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:34.010149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:34.010209 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:34.044050 1039759 cri.go:89] found id: ""
	I0729 14:42:34.044080 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.044092 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:34.044099 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:34.044174 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:34.081222 1039759 cri.go:89] found id: ""
	I0729 14:42:34.081250 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.081260 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:34.081268 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:34.081360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:34.115837 1039759 cri.go:89] found id: ""
	I0729 14:42:34.115878 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.115891 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:34.115899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:34.115973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:34.151086 1039759 cri.go:89] found id: ""
	I0729 14:42:34.151116 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.151126 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:34.151139 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:34.151156 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:34.164058 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:34.164087 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:34.238481 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:34.238503 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:34.238518 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:34.316236 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:34.316279 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:34.356281 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:34.356316 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:36.910374 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:36.924907 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:36.925008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:36.960508 1039759 cri.go:89] found id: ""
	I0729 14:42:36.960535 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.960543 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:36.960550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:36.960631 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:36.999840 1039759 cri.go:89] found id: ""
	I0729 14:42:36.999869 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.999881 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:36.999889 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:36.999960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:37.032801 1039759 cri.go:89] found id: ""
	I0729 14:42:37.032832 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.032840 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:37.032847 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:37.032907 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:37.066359 1039759 cri.go:89] found id: ""
	I0729 14:42:37.066386 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.066394 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:37.066401 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:37.066454 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:37.103816 1039759 cri.go:89] found id: ""
	I0729 14:42:37.103844 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.103852 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:37.103859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:37.103922 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:37.137135 1039759 cri.go:89] found id: ""
	I0729 14:42:37.137175 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.137186 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:37.137194 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:37.137267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:37.170819 1039759 cri.go:89] found id: ""
	I0729 14:42:37.170851 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.170863 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:37.170871 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:37.170941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:37.206427 1039759 cri.go:89] found id: ""
	I0729 14:42:37.206456 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.206467 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:37.206478 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:37.206492 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:37.287119 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:37.287160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:37.331090 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:37.331119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:37.392147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:37.392189 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:37.406017 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:37.406047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:37.471644 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:38.673603 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:40.674214 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:39.706915 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:41.201453 1039440 pod_ready.go:81] duration metric: took 4m0.000454399s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:41.201488 1039440 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:42:41.201514 1039440 pod_ready.go:38] duration metric: took 4m13.052610312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:41.201553 1039440 kubeadm.go:597] duration metric: took 4m22.712976139s to restartPrimaryControlPlane
	W0729 14:42:41.201639 1039440 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:41.201696 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:39.972835 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:39.985878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:39.985945 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:40.020312 1039759 cri.go:89] found id: ""
	I0729 14:42:40.020349 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.020360 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:40.020368 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:40.020456 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:40.055688 1039759 cri.go:89] found id: ""
	I0729 14:42:40.055721 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.055732 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:40.055740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:40.055799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:40.090432 1039759 cri.go:89] found id: ""
	I0729 14:42:40.090463 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.090472 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:40.090478 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:40.090549 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:40.127794 1039759 cri.go:89] found id: ""
	I0729 14:42:40.127823 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.127832 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:40.127838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:40.127894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:40.162911 1039759 cri.go:89] found id: ""
	I0729 14:42:40.162944 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.162953 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:40.162959 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:40.163020 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:40.201578 1039759 cri.go:89] found id: ""
	I0729 14:42:40.201608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.201619 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:40.201625 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:40.201684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:40.247314 1039759 cri.go:89] found id: ""
	I0729 14:42:40.247340 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.247348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:40.247363 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:40.247436 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:40.285393 1039759 cri.go:89] found id: ""
	I0729 14:42:40.285422 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.285431 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:40.285440 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:40.285458 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:40.299901 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:40.299933 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:40.372774 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:40.372802 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:40.372821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:40.454392 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:40.454447 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:40.494641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:40.494671 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
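Because the API server on this node is still down, the log-gathering pass above can only probe each control-plane component with crictl and then fall back to journalctl/dmesg. A minimal Go sketch of that same diagnostic sweep, assuming local access to crictl and journalctl rather than minikube's ssh_runner (the commands are the ones the log itself runs):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probes the log shows: container lookups first, then service logs.
        cmds := []string{
            "sudo crictl ps -a --quiet --name=kube-apiserver",
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u crio -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("$ %s (err=%v)\n%s\n", c, err, out)
        }
    }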
	I0729 14:42:43.046060 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:43.058790 1039759 kubeadm.go:597] duration metric: took 4m3.37086398s to restartPrimaryControlPlane
	W0729 14:42:43.058888 1039759 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:43.058920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:43.544647 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:43.560304 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:42:43.570229 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:42:43.579922 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:42:43.579946 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:42:43.580004 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:42:43.589520 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:42:43.589591 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:42:43.600286 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:42:43.611565 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:42:43.611629 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:42:43.623432 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.633289 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:42:43.633338 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.643410 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:42:43.653723 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:42:43.653816 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
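The four grep/rm pairs above are the stale-kubeconfig cleanup before "kubeadm reset" is followed by a fresh "kubeadm init": each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so init can rewrite it. A minimal Go sketch of that check, assuming local file access instead of ssh_runner (not minikube source):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or stale: remove so `kubeadm init` regenerates it.
                _ = os.Remove(f)
            }
        }
    }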
	I0729 14:42:43.663840 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:42:43.735243 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:42:43.735314 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:42:43.904148 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:42:43.904310 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:42:43.904480 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:42:44.101401 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:42:44.103392 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:42:44.103499 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:42:44.103580 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:42:44.103693 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:42:44.103829 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:42:44.103944 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:42:44.104054 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:42:44.104146 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:42:44.104360 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:42:44.104599 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:42:44.105264 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:42:44.105363 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:42:44.105461 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:42:44.426107 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:42:44.593004 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:42:44.845387 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:42:44.934634 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:42:44.959808 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:42:44.961918 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:42:44.961990 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:42:45.117986 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:42:42.678218 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.175453 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.119775 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:42:45.119913 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:42:45.121333 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:42:45.123001 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:42:45.123783 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:42:45.126031 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:42:47.673678 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:49.674212 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:52.173086 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:54.173797 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:56.178948 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:58.674432 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:00.675207 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:03.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:05.175460 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:07.674421 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:09.674478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:12.882329 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.680602745s)
	I0729 14:43:12.882426 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:12.900267 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:12.910750 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:12.921172 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:12.921194 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:12.921244 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:43:12.931186 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:12.931243 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:12.940800 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:43:12.949875 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:12.949929 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:12.959555 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.968817 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:12.968871 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.978560 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:43:12.987657 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:12.987700 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:12.997142 1039440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:13.057245 1039440 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 14:43:13.057405 1039440 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:13.205227 1039440 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:13.205381 1039440 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:13.205541 1039440 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:43:13.404885 1039440 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:13.407054 1039440 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:13.407148 1039440 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:13.407232 1039440 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:13.407329 1039440 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:13.407411 1039440 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:13.407509 1039440 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:13.407598 1039440 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:13.407688 1039440 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:13.407774 1039440 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:13.407889 1039440 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:13.408006 1039440 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:13.408071 1039440 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:13.408177 1039440 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:13.563569 1039440 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:14.001138 1039440 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:14.091368 1039440 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:14.238732 1039440 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:14.344460 1039440 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:14.346386 1039440 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:14.349309 1039440 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:12.174022 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.673166 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.351183 1039440 out.go:204]   - Booting up control plane ...
	I0729 14:43:14.351293 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:14.351374 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:14.351671 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:14.375878 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:14.377114 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:14.377198 1039440 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:14.528561 1039440 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:14.528665 1039440 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:15.030447 1039440 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044001ms
	I0729 14:43:15.030591 1039440 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:43:20.033357 1039440 kubeadm.go:310] [api-check] The API server is healthy after 5.002708747s
	I0729 14:43:20.055871 1039440 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:43:20.069020 1039440 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:43:20.108465 1039440 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:43:20.108664 1039440 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-751306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:43:20.124596 1039440 kubeadm.go:310] [bootstrap-token] Using token: vqqt7g.hayxn6bly3sjo08s
	I0729 14:43:20.125995 1039440 out.go:204]   - Configuring RBAC rules ...
	I0729 14:43:20.126124 1039440 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:43:20.138826 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:43:20.145976 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:43:20.149166 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:43:20.152875 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:43:20.156268 1039440 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:43:20.446117 1039440 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:43:20.900251 1039440 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:43:21.446105 1039440 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:43:21.446920 1039440 kubeadm.go:310] 
	I0729 14:43:21.446984 1039440 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:43:21.446992 1039440 kubeadm.go:310] 
	I0729 14:43:21.447057 1039440 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:43:21.447063 1039440 kubeadm.go:310] 
	I0729 14:43:21.447084 1039440 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:43:21.447133 1039440 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:43:21.447176 1039440 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:43:21.447182 1039440 kubeadm.go:310] 
	I0729 14:43:21.447233 1039440 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:43:21.447242 1039440 kubeadm.go:310] 
	I0729 14:43:21.447310 1039440 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:43:21.447334 1039440 kubeadm.go:310] 
	I0729 14:43:21.447408 1039440 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:43:21.447515 1039440 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:43:21.447574 1039440 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:43:21.447582 1039440 kubeadm.go:310] 
	I0729 14:43:21.447652 1039440 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:43:21.447722 1039440 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:43:21.447728 1039440 kubeadm.go:310] 
	I0729 14:43:21.447799 1039440 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.447903 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:43:21.447931 1039440 kubeadm.go:310] 	--control-plane 
	I0729 14:43:21.447935 1039440 kubeadm.go:310] 
	I0729 14:43:21.448017 1039440 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:43:21.448025 1039440 kubeadm.go:310] 
	I0729 14:43:21.448115 1039440 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.448239 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:43:21.449071 1039440 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:43:21.449117 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:43:21.449134 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:43:21.450744 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:43:16.674887 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:19.175478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:21.452012 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:43:21.464232 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:43:21.486786 1039440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:43:21.486890 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.486887 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-751306 minikube.k8s.io/updated_at=2024_07_29T14_43_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=default-k8s-diff-port-751306 minikube.k8s.io/primary=true
	I0729 14:43:21.689413 1039440 ops.go:34] apiserver oom_adj: -16
	I0729 14:43:21.697342 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:22.198351 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.673361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:23.674189 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:26.173782 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:22.698043 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.198259 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.697640 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.198325 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.697702 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.198216 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.697625 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.197978 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.698039 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:27.197794 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.126835 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:43:25.127033 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:25.127306 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
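The [kubelet-check] lines above are kubeadm polling the kubelet's local healthz endpoint on 127.0.0.1:10248 and timing out because the kubelet never comes up on this v1.20.0 run. A minimal Go sketch of that probe, with a hypothetical 4-minute deadline and 5-second retry interval (not kubeadm source):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://localhost:10248/healthz")
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            // Connection refused (as in the log) or non-200: retry until the deadline.
            time.Sleep(5 * time.Second)
        }
        fmt.Println("kubelet never became healthy before the deadline")
    }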
	I0729 14:43:28.174036 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:29.667306 1038758 pod_ready.go:81] duration metric: took 4m0.000473541s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	E0729 14:43:29.667341 1038758 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:43:29.667369 1038758 pod_ready.go:38] duration metric: took 4m13.916299366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:29.667407 1038758 kubeadm.go:597] duration metric: took 4m21.57875039s to restartPrimaryControlPlane
	W0729 14:43:29.667481 1038758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:43:29.667513 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:43:27.698036 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.197941 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.697839 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.197525 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.698141 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.197670 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.697615 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.197999 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.697648 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:32.197647 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.127504 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:30.127777 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:32.697837 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.197692 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.697431 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.198048 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.698439 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.802320 1039440 kubeadm.go:1113] duration metric: took 13.31552277s to wait for elevateKubeSystemPrivileges
	I0729 14:43:34.802367 1039440 kubeadm.go:394] duration metric: took 5m16.369033556s to StartCluster
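The repeated "kubectl get sa default" runs above are a readiness poll: the default service account only appears once the control plane is serving namespaced reads, so the loop retries until it succeeds before applying the minikube-rbac binding. A minimal Go sketch of such a poll, assuming a hypothetical 2-minute timeout and 500ms interval, using the binary and kubeconfig paths shown in the log (not minikube source):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "/var/lib/minikube/binaries/v1.30.3/kubectl",
            "get", "sa", "default",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
        }
        deadline := time.Now().Add(2 * time.Minute) // hypothetical timeout
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", args...).Run(); err == nil {
                fmt.Println("default service account exists; RBAC setup can proceed")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }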
	I0729 14:43:34.802391 1039440 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.802488 1039440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:43:34.804740 1039440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.805049 1039440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:43:34.805148 1039440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:43:34.805251 1039440 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805262 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:43:34.805269 1039440 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805313 1039440 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805294 1039440 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805341 1039440 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:43:34.805358 1039440 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805369 1039440 addons.go:243] addon metrics-server should already be in state true
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805325 1039440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751306"
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805838 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805869 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805904 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805928 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805968 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.806026 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.806625 1039440 out.go:177] * Verifying Kubernetes components...
	I0729 14:43:34.807999 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:43:34.823091 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0729 14:43:34.823103 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0729 14:43:34.823532 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.823556 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.824084 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824111 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824372 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824399 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824427 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.824891 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.825049 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0729 14:43:34.825140 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.825191 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.825210 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.825415 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.825927 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.825945 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.826314 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.826903 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.826939 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.829361 1039440 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.829386 1039440 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:43:34.829417 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.829785 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.829832 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.841752 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I0729 14:43:34.842232 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.842938 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.842965 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.843370 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0729 14:43:34.843397 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.843713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.843818 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.844223 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.844247 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.844615 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.844805 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.846424 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.846619 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.848531 1039440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:43:34.848918 1039440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:43:34.849006 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0729 14:43:34.849421 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.849852 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:43:34.849870 1039440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:43:34.849888 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850037 1039440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:34.850053 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:43:34.850069 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850233 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.850251 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.850659 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.851665 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.851781 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.853937 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854441 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854518 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.854540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854589 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.854779 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855035 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.855098 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.855114 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.855169 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.855465 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.855658 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855828 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.856191 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.869648 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0729 14:43:34.870131 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.870600 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.870618 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.871134 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.871334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.873088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.873340 1039440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:34.873353 1039440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:43:34.873369 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.876289 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876751 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.876765 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876952 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.877132 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.877267 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.877375 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
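The sshutil lines above open key-based SSH sessions to the node at 192.168.72.233:22 as user "docker", which is how every "ssh_runner.go:195] Run:" command in this log is executed. A minimal Go sketch of one such session using golang.org/x/crypto/ssh and the key path from the log; host-key checking is skipped here only because the target is a disposable test VM (this is not minikube's sshutil):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.72.233:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo systemctl start kubelet")
        fmt.Printf("err=%v\n%s\n", err, out)
    }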
	I0729 14:43:35.022897 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:43:35.044537 1039440 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057697 1039440 node_ready.go:49] node "default-k8s-diff-port-751306" has status "Ready":"True"
	I0729 14:43:35.057729 1039440 node_ready.go:38] duration metric: took 13.149458ms for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057744 1039440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:35.073050 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:35.150661 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:35.170721 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:35.228871 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:43:35.228903 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:43:35.276845 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:43:35.276880 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:43:35.335623 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.335656 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:43:35.407804 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.446540 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446567 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.446927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.446959 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.446972 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.446985 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.447286 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.447307 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.454199 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.454216 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.454476 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.454495 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.824592 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.824615 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.825058 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.825441 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.825525 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.825567 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.825576 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.827444 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.827454 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.827465 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331175 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331575 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331597 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331607 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331923 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331961 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331986 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.332003 1039440 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751306"
	I0729 14:43:36.333995 1039440 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 14:43:36.335441 1039440 addons.go:510] duration metric: took 1.53029708s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 14:43:37.081992 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.082019 1039440 pod_ready.go:81] duration metric: took 2.008931409s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.082031 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086173 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.086194 1039440 pod_ready.go:81] duration metric: took 4.154163ms for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086203 1039440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090617 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.090636 1039440 pod_ready.go:81] duration metric: took 4.42625ms for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090647 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094929 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.094950 1039440 pod_ready.go:81] duration metric: took 4.296245ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094962 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099462 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.099483 1039440 pod_ready.go:81] duration metric: took 4.513354ms for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099495 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478252 1039440 pod_ready.go:92] pod "kube-proxy-tqtjx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.478281 1039440 pod_ready.go:81] duration metric: took 378.778206ms for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478295 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878655 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.878678 1039440 pod_ready.go:81] duration metric: took 400.374407ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878686 1039440 pod_ready.go:38] duration metric: took 2.820929833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
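	The pod_ready lines above poll each system-critical pod until its PodReady condition reports True, with a 6m0s cap. A minimal, self-contained client-go sketch of that kind of check follows; the kubeconfig path and pod name are taken from this log, but the code itself is illustrative and not minikube's pod_ready.go implementation.

```go
// podready_sketch.go: poll a kube-system pod until its Ready condition is True.
// Illustrative sketch only; timings and error handling are simplified.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-default-k8s-diff-port-751306", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```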
	I0729 14:43:37.878702 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:43:37.878752 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:43:37.894699 1039440 api_server.go:72] duration metric: took 3.08960429s to wait for apiserver process to appear ...
	I0729 14:43:37.894730 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:43:37.894767 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:43:37.899710 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:43:37.900733 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:43:37.900757 1039440 api_server.go:131] duration metric: took 6.019707ms to wait for apiserver health ...
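	The healthz step above issues a GET against the apiserver endpoint recorded in the log and treats a 200 response with body "ok" as healthy. A stdlib sketch of the same probe is shown below; the real check presumably authenticates with the cluster certificates, so TLS verification is skipped here only to keep the example self-contained.

```go
// healthz_probe.go: minimal sketch of an apiserver /healthz probe.
// Endpoint taken from the log above; TLS handling is deliberately simplified.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.233:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
```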
	I0729 14:43:37.900765 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:43:38.083157 1039440 system_pods.go:59] 9 kube-system pods found
	I0729 14:43:38.083197 1039440 system_pods.go:61] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.083204 1039440 system_pods.go:61] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.083210 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.083215 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.083221 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.083226 1039440 system_pods.go:61] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.083231 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.083240 1039440 system_pods.go:61] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.083246 1039440 system_pods.go:61] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.083255 1039440 system_pods.go:74] duration metric: took 182.484884ms to wait for pod list to return data ...
	I0729 14:43:38.083269 1039440 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:43:38.277387 1039440 default_sa.go:45] found service account: "default"
	I0729 14:43:38.277418 1039440 default_sa.go:55] duration metric: took 194.142035ms for default service account to be created ...
	I0729 14:43:38.277429 1039440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:43:38.481158 1039440 system_pods.go:86] 9 kube-system pods found
	I0729 14:43:38.481194 1039440 system_pods.go:89] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.481202 1039440 system_pods.go:89] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.481210 1039440 system_pods.go:89] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.481217 1039440 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.481225 1039440 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.481230 1039440 system_pods.go:89] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.481236 1039440 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.481248 1039440 system_pods.go:89] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.481255 1039440 system_pods.go:89] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.481267 1039440 system_pods.go:126] duration metric: took 203.830126ms to wait for k8s-apps to be running ...
	I0729 14:43:38.481280 1039440 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:43:38.481329 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:38.496175 1039440 system_svc.go:56] duration metric: took 14.88714ms WaitForService to wait for kubelet
	I0729 14:43:38.496209 1039440 kubeadm.go:582] duration metric: took 3.691120463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:43:38.496237 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:43:38.677820 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:43:38.677847 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:43:38.677859 1039440 node_conditions.go:105] duration metric: took 181.616437ms to run NodePressure ...
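	The NodePressure step reads the node's reported capacity, which is where the ephemeral-storage and cpu figures above come from. A small client-go sketch that prints the same two fields (kubeconfig path assumed, as before):

```go
// node_capacity.go: print each node's ephemeral-storage and cpu capacity,
// the two values logged by the NodePressure verification above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```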
	I0729 14:43:38.677874 1039440 start.go:241] waiting for startup goroutines ...
	I0729 14:43:38.677882 1039440 start.go:246] waiting for cluster config update ...
	I0729 14:43:38.677894 1039440 start.go:255] writing updated cluster config ...
	I0729 14:43:38.678166 1039440 ssh_runner.go:195] Run: rm -f paused
	I0729 14:43:38.728616 1039440 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:43:38.730494 1039440 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751306" cluster and "default" namespace by default
	I0729 14:43:40.128244 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:40.128447 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
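	The [kubelet-check] lines come from kubeadm repeatedly probing http://localhost:10248/healthz until the kubelet answers; "connection refused" simply means nothing is listening on that port yet. A rough stdlib sketch of such a retry loop is below; the 4m0s budget matches kubeadm's stated limit, while the retry interval is an assumption.

```go
// kubelet_healthz_wait.go: retry the kubelet healthz endpoint until it responds,
// roughly what the [kubelet-check] phase above is doing.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for a healthy kubelet")
}
```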
	I0729 14:43:55.945251 1038758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.277690166s)
	I0729 14:43:55.945335 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:55.960870 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:55.971175 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:55.981424 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:55.981456 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:55.981512 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:43:55.992098 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:55.992165 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:56.002242 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:43:56.011416 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:56.011486 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:56.020848 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.030219 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:56.030280 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.039957 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:43:56.049607 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:56.049670 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:56.059413 1038758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:56.109453 1038758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 14:43:56.109563 1038758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:56.230876 1038758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:56.231018 1038758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:56.231126 1038758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 14:43:56.244355 1038758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:56.246461 1038758 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:56.246573 1038758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:56.246666 1038758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:56.246755 1038758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:56.246843 1038758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:56.246964 1038758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:56.247169 1038758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:56.247267 1038758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:56.247365 1038758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:56.247473 1038758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:56.247588 1038758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:56.247646 1038758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:56.247718 1038758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:56.593641 1038758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:56.714510 1038758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:56.862780 1038758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:57.010367 1038758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:57.108324 1038758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:57.109028 1038758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:57.111425 1038758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:57.113088 1038758 out.go:204]   - Booting up control plane ...
	I0729 14:43:57.113217 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:57.113336 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:57.113501 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:57.135168 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:57.141915 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:57.142022 1038758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:57.269947 1038758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:57.270056 1038758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:57.772110 1038758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.03343ms
	I0729 14:43:57.772229 1038758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:44:02.773898 1038758 kubeadm.go:310] [api-check] The API server is healthy after 5.00168383s
	I0729 14:44:02.788629 1038758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:44:02.805813 1038758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:44:02.831687 1038758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:44:02.831963 1038758 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-603534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:44:02.842427 1038758 kubeadm.go:310] [bootstrap-token] Using token: hg3j3v.551bb9ju0g9ic9e6
	I0729 14:44:00.129004 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:00.129267 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:02.844018 1038758 out.go:204]   - Configuring RBAC rules ...
	I0729 14:44:02.844160 1038758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:44:02.851693 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:44:02.859496 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:44:02.863556 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:44:02.866896 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:44:02.871375 1038758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:44:03.181687 1038758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:44:03.618445 1038758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:44:04.184562 1038758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:44:04.185548 1038758 kubeadm.go:310] 
	I0729 14:44:04.185655 1038758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:44:04.185689 1038758 kubeadm.go:310] 
	I0729 14:44:04.185788 1038758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:44:04.185801 1038758 kubeadm.go:310] 
	I0729 14:44:04.185825 1038758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:44:04.185906 1038758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:44:04.185983 1038758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:44:04.185992 1038758 kubeadm.go:310] 
	I0729 14:44:04.186079 1038758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:44:04.186090 1038758 kubeadm.go:310] 
	I0729 14:44:04.186155 1038758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:44:04.186165 1038758 kubeadm.go:310] 
	I0729 14:44:04.186231 1038758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:44:04.186337 1038758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:44:04.186431 1038758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:44:04.186441 1038758 kubeadm.go:310] 
	I0729 14:44:04.186575 1038758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:44:04.186679 1038758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:44:04.186689 1038758 kubeadm.go:310] 
	I0729 14:44:04.186810 1038758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.186944 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:44:04.186974 1038758 kubeadm.go:310] 	--control-plane 
	I0729 14:44:04.186984 1038758 kubeadm.go:310] 
	I0729 14:44:04.187102 1038758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:44:04.187111 1038758 kubeadm.go:310] 
	I0729 14:44:04.187224 1038758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.187375 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:44:04.188377 1038758 kubeadm.go:310] W0729 14:43:56.090027    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188711 1038758 kubeadm.go:310] W0729 14:43:56.090887    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188834 1038758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:04.188852 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:44:04.188863 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:44:04.190535 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:44:04.191948 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:44:04.203414 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
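	The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist; the file's actual contents are not captured in the log. The sketch below writes a conflist of the general shape a bridge + host-local + portmap configuration takes, with illustrative values rather than the ones minikube installed here.

```go
// write_bridge_conflist.go: write a bridge CNI conflist similar in shape to the
// one installed above. Subnet, version, and field values are assumptions.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```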
	I0729 14:44:04.223025 1038758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:44:04.223114 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.223132 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603534 minikube.k8s.io/updated_at=2024_07_29T14_44_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=no-preload-603534 minikube.k8s.io/primary=true
	I0729 14:44:04.240353 1038758 ops.go:34] apiserver oom_adj: -16
	I0729 14:44:04.442077 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.942458 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.442843 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.942138 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.442232 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.942611 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.442939 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.942661 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.443044 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.522590 1038758 kubeadm.go:1113] duration metric: took 4.299548803s to wait for elevateKubeSystemPrivileges
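	The repeated "kubectl get sa default" invocations above are a poll: the default service account only appears once the controller manager has created it, after which the earlier "kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default" grant is effective. A client-go sketch of that wait-then-bind sequence, using the names from the commands in the log; the polling interval and error handling are simplifications.

```go
// elevate_sketch.go: wait for the "default" ServiceAccount, then grant
// cluster-admin to kube-system:default, mirroring the kubectl calls above.
package main

import (
	"context"
	"time"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Poll until the default service account exists (the "get sa default" retries above).
	for {
		if _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}

	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
		Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
	}
	if _, err := client.RbacV1().ClusterRoleBindings().Create(ctx, binding, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```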
	I0729 14:44:08.522633 1038758 kubeadm.go:394] duration metric: took 5m0.491164642s to StartCluster
	I0729 14:44:08.522657 1038758 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.522755 1038758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:44:08.524573 1038758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.524893 1038758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:44:08.524999 1038758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:44:08.525112 1038758 addons.go:69] Setting storage-provisioner=true in profile "no-preload-603534"
	I0729 14:44:08.525150 1038758 addons.go:234] Setting addon storage-provisioner=true in "no-preload-603534"
	I0729 14:44:08.525146 1038758 addons.go:69] Setting default-storageclass=true in profile "no-preload-603534"
	I0729 14:44:08.525155 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:44:08.525167 1038758 addons.go:69] Setting metrics-server=true in profile "no-preload-603534"
	I0729 14:44:08.525182 1038758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603534"
	W0729 14:44:08.525162 1038758 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:44:08.525229 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525185 1038758 addons.go:234] Setting addon metrics-server=true in "no-preload-603534"
	W0729 14:44:08.525264 1038758 addons.go:243] addon metrics-server should already be in state true
	I0729 14:44:08.525294 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525510 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525553 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525652 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525668 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525688 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525715 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.526581 1038758 out.go:177] * Verifying Kubernetes components...
	I0729 14:44:08.527919 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:44:08.541874 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0729 14:44:08.542126 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0729 14:44:08.542251 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0729 14:44:08.542397 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542505 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542664 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542948 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.542969 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543075 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543090 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543115 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543127 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543323 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543546 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543551 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543758 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.543779 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544014 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.544035 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544149 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.548026 1038758 addons.go:234] Setting addon default-storageclass=true in "no-preload-603534"
	W0729 14:44:08.548048 1038758 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:44:08.548079 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.548457 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.548478 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.559699 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0729 14:44:08.560297 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.560916 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.560953 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.561332 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.561519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.563422 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.564073 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0729 14:44:08.564524 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.565011 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.565038 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.565427 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.565592 1038758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:44:08.565752 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.566901 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:44:08.566921 1038758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:44:08.566941 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.567688 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.568067 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I0729 14:44:08.568443 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.569019 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.569040 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.569462 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.569583 1038758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:44:08.570038 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.570074 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.571187 1038758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.571204 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:44:08.571223 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.571595 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572203 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.572247 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572506 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.572704 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.572893 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.573100 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.574551 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.574900 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.574919 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.575074 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.575286 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.575427 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.575551 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.585902 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0729 14:44:08.586319 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.586778 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.586803 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.587135 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.587357 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.588606 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.588827 1038758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.588844 1038758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:44:08.588861 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.591169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591434 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.591466 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591600 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.591766 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.591873 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.592103 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.752015 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:44:08.775498 1038758 node_ready.go:35] waiting up to 6m0s for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788547 1038758 node_ready.go:49] node "no-preload-603534" has status "Ready":"True"
	I0729 14:44:08.788572 1038758 node_ready.go:38] duration metric: took 13.040411ms for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788582 1038758 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:08.793475 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:08.861468 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.869542 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:44:08.869567 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:44:08.898398 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.911120 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:44:08.911148 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:44:08.931151 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:08.931179 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:44:08.976093 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:09.449857 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449885 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.449863 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449958 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450343 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450354 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450361 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450373 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450374 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450389 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450442 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450455 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450476 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450487 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450620 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450635 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450637 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450779 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450799 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.493934 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.493959 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.494303 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.494320 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.494342 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.706038 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706072 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.706366 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.706382 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.706391 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706398 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.707956 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.707958 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.707986 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.708015 1038758 addons.go:475] Verifying addon metrics-server=true in "no-preload-603534"
	I0729 14:44:09.709729 1038758 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:44:09.711283 1038758 addons.go:510] duration metric: took 1.186289164s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:44:10.807976 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:13.300325 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:15.800886 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.300042 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.800080 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.800111 1038758 pod_ready.go:81] duration metric: took 10.006613711s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.800124 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804949 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.804974 1038758 pod_ready.go:81] duration metric: took 4.840477ms for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804985 1038758 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810160 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.810176 1038758 pod_ready.go:81] duration metric: took 5.184516ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810185 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814785 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.814807 1038758 pod_ready.go:81] duration metric: took 4.615516ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814819 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819023 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.819044 1038758 pod_ready.go:81] duration metric: took 4.215656ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819056 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198226 1038758 pod_ready.go:92] pod "kube-proxy-7mr4z" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.198252 1038758 pod_ready.go:81] duration metric: took 379.18928ms for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198265 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598783 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.598824 1038758 pod_ready.go:81] duration metric: took 400.55255ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598835 1038758 pod_ready.go:38] duration metric: took 10.810240266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:19.598865 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:44:19.598931 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:44:19.615165 1038758 api_server.go:72] duration metric: took 11.090236578s to wait for apiserver process to appear ...
	I0729 14:44:19.615190 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:44:19.615211 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:44:19.619574 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:44:19.620586 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:44:19.620610 1038758 api_server.go:131] duration metric: took 5.412598ms to wait for apiserver health ...
	I0729 14:44:19.620620 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:44:19.802376 1038758 system_pods.go:59] 9 kube-system pods found
	I0729 14:44:19.802408 1038758 system_pods.go:61] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:19.802415 1038758 system_pods.go:61] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:19.802420 1038758 system_pods.go:61] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:19.802429 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:19.802434 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:19.802441 1038758 system_pods.go:61] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:19.802446 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:19.802454 1038758 system_pods.go:61] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:19.802470 1038758 system_pods.go:61] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:19.802482 1038758 system_pods.go:74] duration metric: took 181.853357ms to wait for pod list to return data ...
	I0729 14:44:19.802491 1038758 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:44:19.998312 1038758 default_sa.go:45] found service account: "default"
	I0729 14:44:19.998348 1038758 default_sa.go:55] duration metric: took 195.845187ms for default service account to be created ...
	I0729 14:44:19.998361 1038758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:44:20.201742 1038758 system_pods.go:86] 9 kube-system pods found
	I0729 14:44:20.201778 1038758 system_pods.go:89] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:20.201787 1038758 system_pods.go:89] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:20.201793 1038758 system_pods.go:89] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:20.201800 1038758 system_pods.go:89] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:20.201807 1038758 system_pods.go:89] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:20.201812 1038758 system_pods.go:89] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:20.201818 1038758 system_pods.go:89] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:20.201826 1038758 system_pods.go:89] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:20.201835 1038758 system_pods.go:89] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:20.201850 1038758 system_pods.go:126] duration metric: took 203.481528ms to wait for k8s-apps to be running ...
	I0729 14:44:20.201860 1038758 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:44:20.201914 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:20.217416 1038758 system_svc.go:56] duration metric: took 15.543768ms WaitForService to wait for kubelet
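	The kubelet service wait above boils down to asking systemd whether the unit is active. A roughly equivalent sketch (the log's exact invocation passes an extra "service" token; the plain unit name is used here):

```go
// kubelet_active.go: check whether the kubelet systemd unit is active,
// comparable to the `systemctl is-active --quiet ... kubelet` call above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```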
	I0729 14:44:20.217445 1038758 kubeadm.go:582] duration metric: took 11.692521258s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:44:20.217464 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:44:20.398667 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:44:20.398696 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:44:20.398708 1038758 node_conditions.go:105] duration metric: took 181.238886ms to run NodePressure ...
	I0729 14:44:20.398720 1038758 start.go:241] waiting for startup goroutines ...
	I0729 14:44:20.398727 1038758 start.go:246] waiting for cluster config update ...
	I0729 14:44:20.398738 1038758 start.go:255] writing updated cluster config ...
	I0729 14:44:20.399014 1038758 ssh_runner.go:195] Run: rm -f paused
	I0729 14:44:20.452187 1038758 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 14:44:20.454434 1038758 out.go:177] * Done! kubectl is now configured to use "no-preload-603534" cluster and "default" namespace by default
	I0729 14:44:40.130597 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:40.130831 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:40.130848 1039759 kubeadm.go:310] 
	I0729 14:44:40.130903 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:44:40.130956 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:44:40.130966 1039759 kubeadm.go:310] 
	I0729 14:44:40.131032 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:44:40.131110 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:44:40.131256 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:44:40.131270 1039759 kubeadm.go:310] 
	I0729 14:44:40.131450 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:44:40.131499 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:44:40.131542 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:44:40.131552 1039759 kubeadm.go:310] 
	I0729 14:44:40.131686 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:44:40.131795 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:44:40.131806 1039759 kubeadm.go:310] 
	I0729 14:44:40.131947 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:44:40.132064 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:44:40.132162 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:44:40.132254 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:44:40.132264 1039759 kubeadm.go:310] 
	I0729 14:44:40.133208 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:40.133363 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:44:40.133468 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 14:44:40.133610 1039759 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
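	[editor's note] The kubeadm failure above points at kubelet health and the container runtime as the first things to inspect. A minimal triage sketch for the affected node, assembled only from the commands quoted in the log output above (the cri-o socket path matches this job's default runtime; run on the guest, e.g. via `minikube ssh -p <profile>`, where `<profile>` is a placeholder for the failing profile name):

		# Is the kubelet service up, and why did it stop?
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet

		# List control-plane containers started by cri-o, then inspect logs of any that exited
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the ps output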
	
	I0729 14:44:40.133676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:44:40.607039 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:40.623771 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:44:40.636278 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:44:40.636310 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:44:40.636371 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:44:40.647768 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:44:40.647827 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:44:40.658281 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:44:40.668393 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:44:40.668477 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:44:40.678521 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.687891 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:44:40.687960 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.698384 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:44:40.708965 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:44:40.709047 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:44:40.719665 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:44:40.796786 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:44:40.796883 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:44:40.946106 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:44:40.946258 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:44:40.946388 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:44:41.140483 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:44:41.142390 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:44:41.142503 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:44:41.142610 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:44:41.142722 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:44:41.142811 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:44:41.142910 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:44:41.142995 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:44:41.143092 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:44:41.143180 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:44:41.143279 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:44:41.143390 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:44:41.143445 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:44:41.143524 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:44:41.188854 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:44:41.329957 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:44:41.968599 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:44:42.034788 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:44:42.055543 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:44:42.056622 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:44:42.056715 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:44:42.204165 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:44:42.205935 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:44:42.206076 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:44:42.216259 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:44:42.217947 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:44:42.219361 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:44:42.221672 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:45:22.223830 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:45:22.223940 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:22.224139 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:27.224303 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:27.224574 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:37.224905 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:37.225090 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:57.226285 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:57.226533 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227279 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:46:37.227485 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227494 1039759 kubeadm.go:310] 
	I0729 14:46:37.227528 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:46:37.227605 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:46:37.227627 1039759 kubeadm.go:310] 
	I0729 14:46:37.227683 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:46:37.227732 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:46:37.227861 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:46:37.227870 1039759 kubeadm.go:310] 
	I0729 14:46:37.228011 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:46:37.228093 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:46:37.228140 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:46:37.228173 1039759 kubeadm.go:310] 
	I0729 14:46:37.228310 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:46:37.228443 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:46:37.228454 1039759 kubeadm.go:310] 
	I0729 14:46:37.228606 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:46:37.228714 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:46:37.228821 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:46:37.228913 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:46:37.228934 1039759 kubeadm.go:310] 
	I0729 14:46:37.229926 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:46:37.230070 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:46:37.230175 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:46:37.230284 1039759 kubeadm.go:394] duration metric: took 7m57.608522587s to StartCluster
	I0729 14:46:37.230347 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:46:37.230435 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:46:37.276238 1039759 cri.go:89] found id: ""
	I0729 14:46:37.276294 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.276304 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:46:37.276317 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:46:37.276439 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:46:37.309934 1039759 cri.go:89] found id: ""
	I0729 14:46:37.309960 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.309969 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:46:37.309975 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:46:37.310031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:46:37.343286 1039759 cri.go:89] found id: ""
	I0729 14:46:37.343312 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.343320 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:46:37.343325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:46:37.343384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:46:37.378735 1039759 cri.go:89] found id: ""
	I0729 14:46:37.378763 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.378773 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:46:37.378779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:46:37.378834 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:46:37.414244 1039759 cri.go:89] found id: ""
	I0729 14:46:37.414275 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.414284 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:46:37.414290 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:46:37.414372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:46:37.453809 1039759 cri.go:89] found id: ""
	I0729 14:46:37.453842 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.453858 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:46:37.453866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:46:37.453955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:46:37.492250 1039759 cri.go:89] found id: ""
	I0729 14:46:37.492279 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.492288 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:46:37.492294 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:46:37.492360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:46:37.554342 1039759 cri.go:89] found id: ""
	I0729 14:46:37.554377 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.554388 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:46:37.554404 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:46:37.554422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:46:37.631118 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:46:37.631165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:46:37.650991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:46:37.651047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:46:37.731852 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:46:37.731880 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:46:37.731897 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:46:37.849049 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:46:37.849092 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 14:46:37.893957 1039759 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:46:37.894031 1039759 out.go:239] * 
	W0729 14:46:37.894120 1039759 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.894150 1039759 out.go:239] * 
	W0729 14:46:37.895278 1039759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:46:37.898735 1039759 out.go:177] 
	W0729 14:46:37.900049 1039759 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.900115 1039759 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:46:37.900146 1039759 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 14:46:37.901531 1039759 out.go:177] 
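	[editor's note] The suggestion above is to retry the start with the kubelet cgroup driver forced to systemd. A hedged sketch of that retry from the host follows; the profile name is a placeholder and the kvm2/cri-o flags are assumptions matching this job's KVM_Linux_crio configuration, not values taken from the log:

		# Re-run the failed start with the suggested kubelet override
		# <old-k8s-version-profile> is a placeholder; substitute the failing profile name
		out/minikube-linux-amd64 start -p <old-k8s-version-profile> \
		  --driver=kvm2 --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd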
	
	
	==> CRI-O <==
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.772579379Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:455ca59b19fdef5f80bb3dd1ed71e3a3c184b470577f1a963c314cd39f651730,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-z9wg5,Uid:f022dfec-8e97-4679-a7dd-739c9231af82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264216444932887,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-z9wg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f022dfec-8e97-4679-a7dd-739c9231af82,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:43:36.128356264Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a8bf282a-27e8-43f9-a2ac-af60
00a4decc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264216428582204,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T14:43:35.811480034Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zxmwx,Uid:13b78c9b-97dc-4313-92d1-76fab481b276,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264215935978106,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:43:34.705194640Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7qhqh,Uid:88941d43
-c67d-4190-896c-edfc4c96b9a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264215875827523,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:43:34.669242856Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&PodSandboxMetadata{Name:kube-proxy-tqtjx,Uid:bd100e13-d714-4ddb-ba43-44be43035b3f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264214828999317,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:43:34.502234858Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-751306,Uid:c77500357cb96f62f4b1d5e33dd3b234,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264195441862841,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c77500357cb96f62f4b1d5e33dd3b234,kubernetes.io/config.seen: 2024-07-29T14:43:14.997235072Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&PodSandb
oxMetadata{Name:kube-controller-manager-default-k8s-diff-port-751306,Uid:a705fe5bc92d59a4f4ff0e77713908eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264195427873654,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a705fe5bc92d59a4f4ff0e77713908eb,kubernetes.io/config.seen: 2024-07-29T14:43:14.997234170Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-751306,Uid:0576d2cd711613b3730e4289c9117d50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264195424689950,Labels:map[string]string{component: kube-apiserver,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.233:8444,kubernetes.io/config.hash: 0576d2cd711613b3730e4289c9117d50,kubernetes.io/config.seen: 2024-07-29T14:43:14.997232771Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-751306,Uid:bb851f12643318c164e97cb88a8f291b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264195423448042,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,tier: control-plane,},Annotat
ions:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.233:2379,kubernetes.io/config.hash: bb851f12643318c164e97cb88a8f291b,kubernetes.io/config.seen: 2024-07-29T14:43:14.997229276Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b293342d-e122-4b80-aab2-13474b0b7c54 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.773497117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24e6443c-6090-463b-88b7-67633277b2d7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.773546208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24e6443c-6090-463b-88b7-67633277b2d7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.774022512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1,PodSandboxId:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722264216772966033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{io.kubernetes.container.hash: 563fca7f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128,PodSandboxId:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216431137276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,},Annotations:map[string]string{io.kubernetes.container.hash: 6f594bdd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a,PodSandboxId:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216357683411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6932c87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49,PodSandboxId:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722264215064916832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,},Annotations:map[string]string{io.kubernetes.container.hash: ff69ba70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203,PodSandboxId:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722264195674414791,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,},Annotations:map[string]string{io.kubernetes.container.hash: f375b443,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66,PodSandboxId:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722264195674573378,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4,PodSandboxId:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722264195651073789,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5,PodSandboxId:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722264195603680804,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,},Annotations:map[string]string{io.kubernetes.container.hash: cec89149,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24e6443c-6090-463b-88b7-67633277b2d7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.812852779Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3275c767-b002-429d-a892-efb94bf64cf3 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.812974497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3275c767-b002-429d-a892-efb94bf64cf3 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.813967003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a8173d4-7aa3-4c63-add6-08718ef2c4d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.814512483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264760814486593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a8173d4-7aa3-4c63-add6-08718ef2c4d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.815031084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6009b65-8f5a-40b0-9dcf-83bb3ab85b05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.815096793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6009b65-8f5a-40b0-9dcf-83bb3ab85b05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.815322179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1,PodSandboxId:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722264216772966033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{io.kubernetes.container.hash: 563fca7f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128,PodSandboxId:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216431137276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,},Annotations:map[string]string{io.kubernetes.container.hash: 6f594bdd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a,PodSandboxId:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216357683411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6932c87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49,PodSandboxId:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722264215064916832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,},Annotations:map[string]string{io.kubernetes.container.hash: ff69ba70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203,PodSandboxId:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722264195674414791,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,},Annotations:map[string]string{io.kubernetes.container.hash: f375b443,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66,PodSandboxId:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722264195674573378,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4,PodSandboxId:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722264195651073789,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5,PodSandboxId:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722264195603680804,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,},Annotations:map[string]string{io.kubernetes.container.hash: cec89149,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6009b65-8f5a-40b0-9dcf-83bb3ab85b05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.853887897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44139ad3-c72f-4c80-9e79-6d5067e2cd2a name=/runtime.v1.RuntimeService/Version
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.853981676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44139ad3-c72f-4c80-9e79-6d5067e2cd2a name=/runtime.v1.RuntimeService/Version
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.855891242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9a50988-f971-47bf-aac2-c870eff049e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.857594067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264760857567852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9a50988-f971-47bf-aac2-c870eff049e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.858385320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=140bff3c-23c0-4d1a-9c5d-b872277e7651 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.858463580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=140bff3c-23c0-4d1a-9c5d-b872277e7651 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.858749179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1,PodSandboxId:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722264216772966033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{io.kubernetes.container.hash: 563fca7f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128,PodSandboxId:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216431137276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,},Annotations:map[string]string{io.kubernetes.container.hash: 6f594bdd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a,PodSandboxId:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216357683411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6932c87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49,PodSandboxId:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722264215064916832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,},Annotations:map[string]string{io.kubernetes.container.hash: ff69ba70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203,PodSandboxId:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722264195674414791,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,},Annotations:map[string]string{io.kubernetes.container.hash: f375b443,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66,PodSandboxId:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722264195674573378,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4,PodSandboxId:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722264195651073789,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5,PodSandboxId:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722264195603680804,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,},Annotations:map[string]string{io.kubernetes.container.hash: cec89149,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=140bff3c-23c0-4d1a-9c5d-b872277e7651 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.895879676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a103d3ff-0ec5-4f4f-b582-2522a434f5a5 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.895970028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a103d3ff-0ec5-4f4f-b582-2522a434f5a5 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.896860438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5229868-5355-4f94-a74b-b141349e3528 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.897479082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264760897446374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5229868-5355-4f94-a74b-b141349e3528 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.897994570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ce7bf26-0258-4d55-a364-5777dbce6ad9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.898061745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ce7bf26-0258-4d55-a364-5777dbce6ad9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:52:40 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:52:40.898360371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1,PodSandboxId:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722264216772966033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{io.kubernetes.container.hash: 563fca7f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128,PodSandboxId:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216431137276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,},Annotations:map[string]string{io.kubernetes.container.hash: 6f594bdd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a,PodSandboxId:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216357683411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6932c87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49,PodSandboxId:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722264215064916832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,},Annotations:map[string]string{io.kubernetes.container.hash: ff69ba70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203,PodSandboxId:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722264195674414791,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,},Annotations:map[string]string{io.kubernetes.container.hash: f375b443,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66,PodSandboxId:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722264195674573378,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4,PodSandboxId:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722264195651073789,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5,PodSandboxId:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722264195603680804,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,},Annotations:map[string]string{io.kubernetes.container.hash: cec89149,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ce7bf26-0258-4d55-a364-5777dbce6ad9 name=/runtime.v1.RuntimeService/ListContainers
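The repeated Version/ImageFsInfo/ListContainers entries above are routine CRI polls against CRI-O's socket (unix:///var/run/crio/crio.sock, the cri-socket recorded in the node annotations further down). A minimal way to query the same endpoints by hand, offered as a sketch rather than a step the test ran, assuming the standard crictl subcommands and the profile name taken from the log hostname:

  out/minikube-linux-amd64 -p default-k8s-diff-port-751306 ssh "sudo crictl version"      # RuntimeService/Version
  out/minikube-linux-amd64 -p default-k8s-diff-port-751306 ssh "sudo crictl ps -a"        # RuntimeService/ListContainers
  out/minikube-linux-amd64 -p default-k8s-diff-port-751306 ssh "sudo crictl imagefsinfo"  # ImageService/ImageFsInfo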
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bdd9ff82307b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   bde11caeb61a2       storage-provisioner
	a3a8915e6c345       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2c44d4c4d31e4       coredns-7db6d8ff4d-zxmwx
	1c67adc88ce93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   41e6b7d9f82ab       coredns-7db6d8ff4d-7qhqh
	a900fda5b7398       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   2d44b9233bf13       kube-proxy-tqtjx
	d29c63a72f53b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   1f4c7d480f851       kube-scheduler-default-k8s-diff-port-751306
	374debdcd43ec       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   626217ee90597       etcd-default-k8s-diff-port-751306
	f8e9a6a0684ae       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   2bd103026510c       kube-controller-manager-default-k8s-diff-port-751306
	b6a77e374a960       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   61aa2726ce281       kube-apiserver-default-k8s-diff-port-751306
	
	
	==> coredns [1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-751306
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-751306
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=default-k8s-diff-port-751306
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T14_43_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:43:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-751306
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:52:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:48:47 +0000   Mon, 29 Jul 2024 14:43:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:48:47 +0000   Mon, 29 Jul 2024 14:43:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:48:47 +0000   Mon, 29 Jul 2024 14:43:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:48:47 +0000   Mon, 29 Jul 2024 14:43:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.233
	  Hostname:    default-k8s-diff-port-751306
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c5d0f2c12df40eea13545c58de8c6ff
	  System UUID:                7c5d0f2c-12df-40ee-a135-45c58de8c6ff
	  Boot ID:                    ce2a1acc-c6b5-4b33-b8fe-c8d27e8b278f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-7qhqh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-zxmwx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-751306                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-751306             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-751306    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-tqtjx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-751306             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-569cc877fc-z9wg5                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m26s)  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node default-k8s-diff-port-751306 event: Registered Node default-k8s-diff-port-751306 in Controller
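Note that although the node lists a metrics-server pod scheduled alongside the control-plane components, no metrics-server container appears in the container status table above, which is consistent with the v1beta1.metrics.k8s.io errors in the kube-apiserver log below. A minimal way to regenerate this node view against the same cluster, assuming the kubeconfig context carries the profile name as minikube normally sets it:

  kubectl --context default-k8s-diff-port-751306 describe node default-k8s-diff-port-751306
  kubectl --context default-k8s-diff-port-751306 -n kube-system get pods -o wide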
	
	
	==> dmesg <==
	[  +0.062201] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.052629] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul29 14:38] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.403922] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.573185] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.297981] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.062685] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066894] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.193315] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.116147] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.305072] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.529224] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.065971] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.151022] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.664028] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.020312] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 14:42] kauditd_printk_skb: 3 callbacks suppressed
	[Jul29 14:43] systemd-fstab-generator[3562]: Ignoring "noauto" option for root device
	[  +4.613950] kauditd_printk_skb: 59 callbacks suppressed
	[  +1.449871] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[ +14.397876] systemd-fstab-generator[4100]: Ignoring "noauto" option for root device
	[  +0.084643] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 14:44] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203] <==
	{"level":"info","ts":"2024-07-29T14:43:16.02812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c switched to configuration voters=(16067863881314862876)"}
	{"level":"info","ts":"2024-07-29T14:43:16.028242Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"85608bfa40f43412","local-member-id":"defc8511a11a071c","added-peer-id":"defc8511a11a071c","added-peer-peer-urls":["https://192.168.72.233:2380"]}
	{"level":"info","ts":"2024-07-29T14:43:16.040163Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T14:43:16.040453Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"defc8511a11a071c","initial-advertise-peer-urls":["https://192.168.72.233:2380"],"listen-peer-urls":["https://192.168.72.233:2380"],"advertise-client-urls":["https://192.168.72.233:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.233:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T14:43:16.040516Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T14:43:16.040633Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.233:2380"}
	{"level":"info","ts":"2024-07-29T14:43:16.040665Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.233:2380"}
	{"level":"info","ts":"2024-07-29T14:43:16.392389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:16.392508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:16.392609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c received MsgPreVoteResp from defc8511a11a071c at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:16.392648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:16.392673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c received MsgVoteResp from defc8511a11a071c at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:16.3927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c became leader at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:16.392725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: defc8511a11a071c elected leader defc8511a11a071c at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:16.397445Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:16.399576Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"defc8511a11a071c","local-member-attributes":"{Name:default-k8s-diff-port-751306 ClientURLs:[https://192.168.72.233:2379]}","request-path":"/0/members/defc8511a11a071c/attributes","cluster-id":"85608bfa40f43412","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:43:16.399794Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:43:16.400193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:43:16.40232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"85608bfa40f43412","local-member-id":"defc8511a11a071c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:16.408368Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:16.411341Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:16.402396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:43:16.411413Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:43:16.405823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.233:2379"}
	{"level":"info","ts":"2024-07-29T14:43:16.414898Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:52:41 up 14 min,  0 users,  load average: 0.24, 0.19, 0.15
	Linux default-k8s-diff-port-751306 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5] <==
	I0729 14:46:36.858712       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:48:18.266216       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:48:18.266391       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 14:48:19.266754       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:48:19.266848       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:48:19.266874       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:48:19.266940       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:48:19.267005       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:48:19.268186       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:49:19.267371       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:49:19.267455       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:49:19.267465       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:49:19.268645       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:49:19.268723       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:49:19.268730       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:51:19.268123       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:51:19.268508       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:51:19.268583       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:51:19.269788       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:51:19.269890       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:51:19.269916       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4] <==
	I0729 14:47:05.091701       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:47:34.647992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:47:35.099563       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:48:04.653152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:48:05.108394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:48:34.658861       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:48:35.120690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:49:04.665143       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:49:05.128175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:49:30.785344       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="309.691µs"
	E0729 14:49:34.671124       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:49:35.137097       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:49:45.768910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="86.169µs"
	E0729 14:50:04.676083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:50:05.145808       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:50:34.681429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:50:35.165941       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:51:04.687488       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:51:05.176438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:51:34.698129       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:51:35.188233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:52:04.703191       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:52:05.195988       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:52:34.709189       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:52:35.219055       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49] <==
	I0729 14:43:35.270824       1 server_linux.go:69] "Using iptables proxy"
	I0729 14:43:35.283381       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.233"]
	I0729 14:43:35.360745       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 14:43:35.360797       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:43:35.360814       1 server_linux.go:165] "Using iptables Proxier"
	I0729 14:43:35.366667       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 14:43:35.366938       1 server.go:872] "Version info" version="v1.30.3"
	I0729 14:43:35.366976       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:43:35.375130       1 config.go:192] "Starting service config controller"
	I0729 14:43:35.375168       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:43:35.375190       1 config.go:101] "Starting endpoint slice config controller"
	I0729 14:43:35.375193       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:43:35.375665       1 config.go:319] "Starting node config controller"
	I0729 14:43:35.375692       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:43:35.475432       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 14:43:35.475493       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:43:35.476184       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66] <==
	W0729 14:43:18.314461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 14:43:18.314487       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 14:43:18.314526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 14:43:18.314551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 14:43:18.314590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 14:43:18.314615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 14:43:19.156682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 14:43:19.156827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 14:43:19.233212       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 14:43:19.233306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 14:43:19.291359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 14:43:19.291408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 14:43:19.307074       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 14:43:19.307120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 14:43:19.337754       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 14:43:19.337841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 14:43:19.441874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 14:43:19.441922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 14:43:19.539799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 14:43:19.539846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 14:43:19.568729       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 14:43:19.568784       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:43:19.600068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 14:43:19.600117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0729 14:43:22.403370       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 14:50:20 default-k8s-diff-port-751306 kubelet[3888]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:50:20 default-k8s-diff-port-751306 kubelet[3888]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:50:20 default-k8s-diff-port-751306 kubelet[3888]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:50:23 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:50:23.754511    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:50:35 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:50:35.754175    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:50:47 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:50:47.754583    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:50:59 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:50:59.754318    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:51:12 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:51:12.754022    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:51:20 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:51:20.786537    3888 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:51:20 default-k8s-diff-port-751306 kubelet[3888]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:51:20 default-k8s-diff-port-751306 kubelet[3888]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:51:20 default-k8s-diff-port-751306 kubelet[3888]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:51:20 default-k8s-diff-port-751306 kubelet[3888]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:51:24 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:51:24.755187    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:51:35 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:51:35.755034    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:51:47 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:51:47.754770    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:51:59 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:51:59.755177    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:52:13 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:52:13.754524    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:52:20 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:52:20.785032    3888 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:52:20 default-k8s-diff-port-751306 kubelet[3888]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:52:20 default-k8s-diff-port-751306 kubelet[3888]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:52:20 default-k8s-diff-port-751306 kubelet[3888]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:52:20 default-k8s-diff-port-751306 kubelet[3888]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:52:25 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:52:25.754562    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:52:39 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:52:39.755170    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	
	
	==> storage-provisioner [bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1] <==
	I0729 14:43:36.898474       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 14:43:36.917969       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 14:43:36.918214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 14:43:36.956215       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 14:43:36.957012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-751306_c62c2e71-0363-41ed-9cf8-6f9e32f048cc!
	I0729 14:43:36.958805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5710b067-6ce0-4fdf-b225-4caad0b7f64b", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-751306_c62c2e71-0363-41ed-9cf8-6f9e32f048cc became leader
	I0729 14:43:37.059373       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-751306_c62c2e71-0363-41ed-9cf8-6f9e32f048cc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-751306 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-z9wg5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-751306 describe pod metrics-server-569cc877fc-z9wg5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-751306 describe pod metrics-server-569cc877fc-z9wg5: exit status 1 (64.933109ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-z9wg5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-751306 describe pod metrics-server-569cc877fc-z9wg5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 14:44:30.662224  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 14:45:31.595972  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:46:10.159607  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603534 -n no-preload-603534
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 14:53:21.016085164 +0000 UTC m=+6087.423808934
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-603534 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-603534 logs -n 25: (2.119022942s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-513289 sudo cat                             | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo find                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo crio                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-513289                                      | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-054967 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | disable-driver-mounts-054967                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:34:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:34:53.874295 1039759 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:34:53.874567 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874577 1039759 out.go:304] Setting ErrFile to fd 2...
	I0729 14:34:53.874580 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874774 1039759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:34:53.875294 1039759 out.go:298] Setting JSON to false
	I0729 14:34:53.876313 1039759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15446,"bootTime":1722248248,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:34:53.876373 1039759 start.go:139] virtualization: kvm guest
	I0729 14:34:53.878446 1039759 out.go:177] * [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:34:53.879820 1039759 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:34:53.879855 1039759 notify.go:220] Checking for updates...
	I0729 14:34:53.882201 1039759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:34:53.883330 1039759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:34:53.884514 1039759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:34:53.885734 1039759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:34:53.886894 1039759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:34:53.888361 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:34:53.888789 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.888850 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.903960 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 14:34:53.904467 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.905083 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.905112 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.905449 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.905609 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.907360 1039759 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 14:34:53.908710 1039759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:34:53.909026 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.909064 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.923834 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0729 14:34:53.924300 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.924787 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.924809 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.925150 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.925352 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.960368 1039759 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:34:53.961649 1039759 start.go:297] selected driver: kvm2
	I0729 14:34:53.961662 1039759 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.961778 1039759 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:34:53.962398 1039759 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.962459 1039759 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:34:53.977941 1039759 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:34:53.978311 1039759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:34:53.978341 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:34:53.978350 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:34:53.978395 1039759 start.go:340] cluster config:
	{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.978499 1039759 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.980167 1039759 out.go:177] * Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	I0729 14:34:55.588663 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:34:53.981356 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:34:53.981390 1039759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:34:53.981400 1039759 cache.go:56] Caching tarball of preloaded images
	I0729 14:34:53.981477 1039759 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:34:53.981487 1039759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:34:53.981600 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:34:53.981775 1039759 start.go:360] acquireMachinesLock for old-k8s-version-360866: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:34:58.660730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:04.740665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:07.812781 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:13.892659 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:16.964692 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:23.044749 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:26.116761 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:32.196730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:35.268709 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:41.348712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:44.420693 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:50.500715 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:53.572717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:59.652707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:02.724722 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:08.804719 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:11.876665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:17.956684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:21.028707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:27.108667 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:30.180710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:36.260645 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:39.332717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:45.412694 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:48.484713 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:54.564703 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:57.636707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:03.716690 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:06.788660 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:12.868658 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:15.940708 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:22.020684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:25.092712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:31.172710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:34.177216 1039263 start.go:364] duration metric: took 3m42.890532077s to acquireMachinesLock for "embed-certs-668123"
	I0729 14:37:34.177291 1039263 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:34.177300 1039263 fix.go:54] fixHost starting: 
	I0729 14:37:34.177641 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:34.177673 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:34.193427 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0729 14:37:34.193879 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:34.194396 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:37:34.194421 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:34.194774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:34.194987 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:34.195156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:37:34.196597 1039263 fix.go:112] recreateIfNeeded on embed-certs-668123: state=Stopped err=<nil>
	I0729 14:37:34.196642 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	W0729 14:37:34.196802 1039263 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:34.198564 1039263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-668123" ...
	I0729 14:37:34.199926 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Start
	I0729 14:37:34.200086 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring networks are active...
	I0729 14:37:34.200833 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network default is active
	I0729 14:37:34.201159 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network mk-embed-certs-668123 is active
	I0729 14:37:34.201578 1039263 main.go:141] libmachine: (embed-certs-668123) Getting domain xml...
	I0729 14:37:34.202214 1039263 main.go:141] libmachine: (embed-certs-668123) Creating domain...
	I0729 14:37:34.510575 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting to get IP...
	I0729 14:37:34.511459 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.511909 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.512006 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.511904 1040307 retry.go:31] will retry after 294.890973ms: waiting for machine to come up
	I0729 14:37:34.808513 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.809044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.809070 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.809007 1040307 retry.go:31] will retry after 296.152247ms: waiting for machine to come up
	I0729 14:37:35.106423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.106839 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.106872 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.106773 1040307 retry.go:31] will retry after 384.830082ms: waiting for machine to come up
	I0729 14:37:35.493463 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.493902 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.493933 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.493861 1040307 retry.go:31] will retry after 490.673812ms: waiting for machine to come up
	I0729 14:37:35.986675 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.987184 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.987235 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.987099 1040307 retry.go:31] will retry after 725.022775ms: waiting for machine to come up
	I0729 14:37:34.174673 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:34.174713 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175060 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:37:34.175084 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175279 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:37:34.177042 1038758 machine.go:97] duration metric: took 4m37.39644293s to provisionDockerMachine
	I0729 14:37:34.177087 1038758 fix.go:56] duration metric: took 4m37.417815827s for fixHost
	I0729 14:37:34.177094 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 4m37.417912853s
	W0729 14:37:34.177127 1038758 start.go:714] error starting host: provision: host is not running
	W0729 14:37:34.177230 1038758 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 14:37:34.177240 1038758 start.go:729] Will try again in 5 seconds ...
	I0729 14:37:36.714078 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:36.714502 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:36.714565 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:36.714389 1040307 retry.go:31] will retry after 722.684756ms: waiting for machine to come up
	I0729 14:37:37.438316 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:37.438859 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:37.438891 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:37.438802 1040307 retry.go:31] will retry after 1.163999997s: waiting for machine to come up
	I0729 14:37:38.604109 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:38.604503 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:38.604531 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:38.604469 1040307 retry.go:31] will retry after 1.401566003s: waiting for machine to come up
	I0729 14:37:40.007310 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:40.007900 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:40.007929 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:40.007839 1040307 retry.go:31] will retry after 1.40470791s: waiting for machine to come up
	I0729 14:37:39.178982 1038758 start.go:360] acquireMachinesLock for no-preload-603534: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:37:41.414509 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:41.415018 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:41.415049 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:41.414959 1040307 retry.go:31] will retry after 2.205183048s: waiting for machine to come up
	I0729 14:37:43.623427 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:43.623894 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:43.623922 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:43.623856 1040307 retry.go:31] will retry after 2.444881913s: waiting for machine to come up
	I0729 14:37:46.070961 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:46.071314 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:46.071338 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:46.071271 1040307 retry.go:31] will retry after 3.115189863s: waiting for machine to come up
	I0729 14:37:49.187610 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:49.188107 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:49.188134 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:49.188054 1040307 retry.go:31] will retry after 3.139484284s: waiting for machine to come up
	I0729 14:37:53.653416 1039440 start.go:364] duration metric: took 3m41.12464482s to acquireMachinesLock for "default-k8s-diff-port-751306"
	I0729 14:37:53.653486 1039440 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:53.653494 1039440 fix.go:54] fixHost starting: 
	I0729 14:37:53.653880 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:53.653913 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:53.671499 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0729 14:37:53.671927 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:53.672550 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:37:53.672584 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:53.672986 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:53.673198 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:37:53.673353 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:37:53.674706 1039440 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751306: state=Stopped err=<nil>
	I0729 14:37:53.674736 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	W0729 14:37:53.674896 1039440 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:53.677098 1039440 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751306" ...
	I0729 14:37:52.329477 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.329880 1039263 main.go:141] libmachine: (embed-certs-668123) Found IP for machine: 192.168.50.53
	I0729 14:37:52.329895 1039263 main.go:141] libmachine: (embed-certs-668123) Reserving static IP address...
	I0729 14:37:52.329906 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has current primary IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.330376 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.330414 1039263 main.go:141] libmachine: (embed-certs-668123) Reserved static IP address: 192.168.50.53
	I0729 14:37:52.330433 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | skip adding static IP to network mk-embed-certs-668123 - found existing host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"}
	I0729 14:37:52.330453 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Getting to WaitForSSH function...
	I0729 14:37:52.330465 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting for SSH to be available...
	I0729 14:37:52.332510 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332794 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.332821 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332897 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH client type: external
	I0729 14:37:52.332931 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa (-rw-------)
	I0729 14:37:52.332963 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:37:52.332976 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | About to run SSH command:
	I0729 14:37:52.332989 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | exit 0
	I0729 14:37:52.456152 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | SSH cmd err, output: <nil>: 
	I0729 14:37:52.456532 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetConfigRaw
	I0729 14:37:52.457156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.459620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.459946 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.459980 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.460200 1039263 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/config.json ...
	I0729 14:37:52.460384 1039263 machine.go:94] provisionDockerMachine start ...
	I0729 14:37:52.460404 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:52.460672 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.462798 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463089 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.463119 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463260 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.463428 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463594 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463703 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.463856 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.464071 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.464080 1039263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:37:52.564925 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:37:52.564959 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565214 1039263 buildroot.go:166] provisioning hostname "embed-certs-668123"
	I0729 14:37:52.565241 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565472 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.568131 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568450 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.568482 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568615 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.568825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.568975 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.569143 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.569335 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.569511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.569522 1039263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-668123 && echo "embed-certs-668123" | sudo tee /etc/hostname
	I0729 14:37:52.686424 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-668123
	
	I0729 14:37:52.686459 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.689074 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689387 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.689422 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689619 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.689825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.689999 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.690164 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.690338 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.690511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.690526 1039263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-668123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-668123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-668123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:37:52.801778 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:52.801812 1039263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:37:52.801841 1039263 buildroot.go:174] setting up certificates
	I0729 14:37:52.801851 1039263 provision.go:84] configureAuth start
	I0729 14:37:52.801863 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.802133 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.804526 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.804877 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.804910 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.805053 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.807140 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807369 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.807395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807505 1039263 provision.go:143] copyHostCerts
	I0729 14:37:52.807594 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:37:52.807608 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:37:52.807698 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:37:52.807840 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:37:52.807852 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:37:52.807891 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:37:52.807969 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:37:52.807979 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:37:52.808011 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:37:52.808084 1039263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-668123 san=[127.0.0.1 192.168.50.53 embed-certs-668123 localhost minikube]
	I0729 14:37:53.007382 1039263 provision.go:177] copyRemoteCerts
	I0729 14:37:53.007459 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:37:53.007548 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.010097 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010465 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.010488 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010660 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.010864 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.011037 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.011193 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.092043 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 14:37:53.116737 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:37:53.139828 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:37:53.162813 1039263 provision.go:87] duration metric: took 360.943219ms to configureAuth
	I0729 14:37:53.162856 1039263 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:37:53.163051 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:37:53.163144 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.165757 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166102 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.166130 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166272 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.166465 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166665 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166817 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.166983 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.167154 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.167169 1039263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:37:53.428217 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:37:53.428246 1039263 machine.go:97] duration metric: took 967.84942ms to provisionDockerMachine
	I0729 14:37:53.428258 1039263 start.go:293] postStartSetup for "embed-certs-668123" (driver="kvm2")
	I0729 14:37:53.428269 1039263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:37:53.428298 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.428641 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:37:53.428669 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.431228 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431593 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.431620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431797 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.431992 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.432159 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.432313 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.511226 1039263 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:37:53.515527 1039263 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:37:53.515555 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:37:53.515635 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:37:53.515724 1039263 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:37:53.515846 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:37:53.525606 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:53.548757 1039263 start.go:296] duration metric: took 120.484005ms for postStartSetup
	I0729 14:37:53.548798 1039263 fix.go:56] duration metric: took 19.371497305s for fixHost
	I0729 14:37:53.548827 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.551373 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551697 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.551725 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.552085 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552226 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552383 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.552574 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.552746 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.552756 1039263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:37:53.653267 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263873.628230451
	
	I0729 14:37:53.653291 1039263 fix.go:216] guest clock: 1722263873.628230451
	I0729 14:37:53.653301 1039263 fix.go:229] Guest: 2024-07-29 14:37:53.628230451 +0000 UTC Remote: 2024-07-29 14:37:53.548802078 +0000 UTC m=+242.399919494 (delta=79.428373ms)
	I0729 14:37:53.653329 1039263 fix.go:200] guest clock delta is within tolerance: 79.428373ms
	I0729 14:37:53.653337 1039263 start.go:83] releasing machines lock for "embed-certs-668123", held for 19.476079428s
	I0729 14:37:53.653364 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.653673 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:53.656383 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656805 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.656836 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656958 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657597 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657831 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657923 1039263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:37:53.657981 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.658101 1039263 ssh_runner.go:195] Run: cat /version.json
	I0729 14:37:53.658129 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.660964 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661349 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661374 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661400 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661446 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661628 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661711 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661795 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.661918 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.662012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662092 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662200 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.662234 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.764286 1039263 ssh_runner.go:195] Run: systemctl --version
	I0729 14:37:53.772936 1039263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:37:53.922874 1039263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:37:53.928953 1039263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:37:53.929035 1039263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:37:53.947388 1039263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:37:53.947417 1039263 start.go:495] detecting cgroup driver to use...
	I0729 14:37:53.947496 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:37:53.964141 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:37:53.985980 1039263 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:37:53.986042 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:37:54.009646 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:37:54.023449 1039263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:37:54.139511 1039263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:37:54.312559 1039263 docker.go:233] disabling docker service ...
	I0729 14:37:54.312655 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:37:54.327466 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:37:54.342225 1039263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:37:54.485007 1039263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:37:54.623987 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:37:54.638100 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:37:54.658833 1039263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:37:54.658911 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.670274 1039263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:37:54.670366 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.681548 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.691626 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.701915 1039263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:37:54.713399 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.723631 1039263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.740625 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.751521 1039263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:37:54.761895 1039263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:37:54.761942 1039263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:37:54.775663 1039263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:37:54.785415 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:54.933441 1039263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:37:55.066449 1039263 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:37:55.066539 1039263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:37:55.071614 1039263 start.go:563] Will wait 60s for crictl version
	I0729 14:37:55.071671 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:37:55.075727 1039263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:37:55.117286 1039263 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:37:55.117395 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.145732 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.179714 1039263 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:37:55.181109 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:55.184274 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.184734 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:55.184761 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.185066 1039263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 14:37:55.190374 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:55.206768 1039263 kubeadm.go:883] updating cluster {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:37:55.207054 1039263 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:37:55.207130 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:55.247814 1039263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:37:55.247890 1039263 ssh_runner.go:195] Run: which lz4
	I0729 14:37:55.251992 1039263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:37:55.256440 1039263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:37:55.256468 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:37:53.678402 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Start
	I0729 14:37:53.678610 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring networks are active...
	I0729 14:37:53.679311 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network default is active
	I0729 14:37:53.679767 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network mk-default-k8s-diff-port-751306 is active
	I0729 14:37:53.680133 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Getting domain xml...
	I0729 14:37:53.680818 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Creating domain...
	I0729 14:37:54.024601 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting to get IP...
	I0729 14:37:54.025431 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025838 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025944 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.025837 1040438 retry.go:31] will retry after 280.254814ms: waiting for machine to come up
	I0729 14:37:54.307727 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308293 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.308220 1040438 retry.go:31] will retry after 384.348242ms: waiting for machine to come up
	I0729 14:37:54.693703 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694304 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.694251 1040438 retry.go:31] will retry after 417.76448ms: waiting for machine to come up
	I0729 14:37:55.113670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114243 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114272 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.114191 1040438 retry.go:31] will retry after 589.741485ms: waiting for machine to come up
	I0729 14:37:55.706127 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706646 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.706569 1040438 retry.go:31] will retry after 471.427821ms: waiting for machine to come up
	I0729 14:37:56.179380 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179867 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179896 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.179814 1040438 retry.go:31] will retry after 624.275074ms: waiting for machine to come up
	I0729 14:37:56.805673 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806042 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806063 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.806018 1040438 retry.go:31] will retry after 1.027377333s: waiting for machine to come up
	I0729 14:37:56.743842 1039263 crio.go:462] duration metric: took 1.49188656s to copy over tarball
	I0729 14:37:56.743941 1039263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:37:58.879573 1039263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135595087s)
	I0729 14:37:58.879619 1039263 crio.go:469] duration metric: took 2.135735155s to extract the tarball
	I0729 14:37:58.879628 1039263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:37:58.916966 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:58.958323 1039263 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:37:58.958349 1039263 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:37:58.958357 1039263 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.30.3 crio true true} ...
	I0729 14:37:58.958537 1039263 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-668123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:37:58.958632 1039263 ssh_runner.go:195] Run: crio config
	I0729 14:37:59.004120 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:37:59.004146 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:37:59.004163 1039263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:37:59.004192 1039263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-668123 NodeName:embed-certs-668123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:37:59.004371 1039263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-668123"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
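The block above is the kubeadm/kubelet/kube-proxy configuration that gets rendered and copied to /var/tmp/minikube/kubeadm.yaml.new in the scp lines that follow. A rough Go sketch of that render step is below; the struct, template fragment, and field names are illustrative stand-ins built from values visible in the log, not minikube's actual types or template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // params is an illustrative subset of the values substituted into the config.
    type params struct {
    	NodeName    string
    	AdvertiseIP string
    	BindPort    int
    	K8sVersion  string
    	PodSubnet   string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseIP}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Render to stdout; the real tool writes the result to the VM over SSH.
    	if err := t.Execute(os.Stdout, params{
    		NodeName:    "embed-certs-668123",
    		AdvertiseIP: "192.168.50.53",
    		BindPort:    8443,
    		K8sVersion:  "v1.30.3",
    		PodSubnet:   "10.244.0.0/16",
    	}); err != nil {
    		panic(err)
    	}
    }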
	
	I0729 14:37:59.004469 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:37:59.014796 1039263 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:37:59.014866 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:37:59.024575 1039263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 14:37:59.040707 1039263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:37:59.056693 1039263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 14:37:59.073320 1039263 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0729 14:37:59.077226 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:59.091283 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:59.221532 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:37:59.239319 1039263 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123 for IP: 192.168.50.53
	I0729 14:37:59.239362 1039263 certs.go:194] generating shared ca certs ...
	I0729 14:37:59.239387 1039263 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:37:59.239604 1039263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:37:59.239654 1039263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:37:59.239667 1039263 certs.go:256] generating profile certs ...
	I0729 14:37:59.239818 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/client.key
	I0729 14:37:59.239922 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key.544998fe
	I0729 14:37:59.239969 1039263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key
	I0729 14:37:59.240137 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:37:59.240188 1039263 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:37:59.240202 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:37:59.240238 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:37:59.240280 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:37:59.240313 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:37:59.240385 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:59.241551 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:37:59.278842 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:37:59.305668 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:37:59.332314 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:37:59.377867 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 14:37:59.405592 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:37:59.438073 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:37:59.462130 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:37:59.489158 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:37:59.511811 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:37:59.534728 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:37:59.558680 1039263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:37:59.575404 1039263 ssh_runner.go:195] Run: openssl version
	I0729 14:37:59.581518 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:37:59.592024 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596913 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596983 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.602973 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:37:59.613891 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:37:59.624053 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628881 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628922 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.634672 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:37:59.645513 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:37:59.656385 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661141 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661209 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.667169 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
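The openssl/ln sequence above indexes each CA certificate the way TLS trust stores expect: the subject hash from `openssl x509 -hash -noout` becomes the name of a `<hash>.0` symlink under /etc/ssl/certs. A small Go sketch of that single step is below, shelling out to openssl the same way the log does; the helper name is illustrative, the paths are the ones shown above, and it needs root to write into /etc/ssl/certs.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash computes the openssl subject hash of certPath and creates
    // the /etc/ssl/certs/<hash>.0 symlink that TLS libraries use for lookup.
    func linkCertByHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale link, then point it at the shared certificate.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }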
	I0729 14:37:59.678240 1039263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:37:59.683075 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:37:59.689013 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:37:59.694754 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:37:59.700865 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:37:59.706664 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:37:59.712457 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:37:59.718347 1039263 kubeadm.go:392] StartCluster: {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:37:59.718460 1039263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:37:59.718505 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.756046 1039263 cri.go:89] found id: ""
	I0729 14:37:59.756143 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:37:59.766198 1039263 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:37:59.766222 1039263 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:37:59.766278 1039263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:37:59.775664 1039263 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:37:59.776877 1039263 kubeconfig.go:125] found "embed-certs-668123" server: "https://192.168.50.53:8443"
	I0729 14:37:59.778802 1039263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:37:59.787805 1039263 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.53
	I0729 14:37:59.787840 1039263 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:37:59.787854 1039263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:37:59.787908 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.828927 1039263 cri.go:89] found id: ""
	I0729 14:37:59.829016 1039263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:37:59.844889 1039263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:37:59.854233 1039263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:37:59.854264 1039263 kubeadm.go:157] found existing configuration files:
	
	I0729 14:37:59.854334 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:37:59.863123 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:37:59.863183 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:37:59.872154 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:37:59.880819 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:37:59.880881 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:37:59.889714 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.898278 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:37:59.898330 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.907358 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:37:59.916352 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:37:59.916430 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:37:59.925239 1039263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:37:59.934353 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.045086 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.793783 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.009839 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.080217 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.153377 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:01.153496 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:37:57.835202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835636 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835674 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:57.835572 1040438 retry.go:31] will retry after 987.763901ms: waiting for machine to come up
	I0729 14:37:58.824975 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825428 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825457 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:58.825348 1040438 retry.go:31] will retry after 1.189429393s: waiting for machine to come up
	I0729 14:38:00.016130 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016569 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016604 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:00.016509 1040438 retry.go:31] will retry after 1.424039091s: waiting for machine to come up
	I0729 14:38:01.443138 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443511 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:01.443470 1040438 retry.go:31] will retry after 2.531090823s: waiting for machine to come up
	I0729 14:38:01.653905 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.153772 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.653590 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.669429 1039263 api_server.go:72] duration metric: took 1.516051254s to wait for apiserver process to appear ...
	I0729 14:38:02.669467 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:02.669495 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.531413 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.531451 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.531467 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.602173 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.602205 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.670522 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.680835 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:05.680861 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.170512 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.176052 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.176084 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.669679 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.674813 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.674854 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:07.170539 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:07.174573 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:38:07.180250 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:07.180275 1039263 api_server.go:131] duration metric: took 4.510799806s to wait for apiserver health ...
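The health wait above is a plain poll of the apiserver's /healthz endpoint: 403 while anonymous access is still being bootstrapped, 500 while poststart hooks are pending, then 200. A minimal Go sketch of that loop, assuming the address from the log; the InsecureSkipVerify transport is purely for illustration (the real check authenticates with the cluster's client certificates).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
    // printing the body on failure the way the log above does.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.53:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }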
	I0729 14:38:07.180284 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:38:07.180290 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:07.181866 1039263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:03.976004 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976514 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976544 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:03.976474 1040438 retry.go:31] will retry after 3.356304099s: waiting for machine to come up
	I0729 14:38:07.335600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336031 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336086 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:07.335992 1040438 retry.go:31] will retry after 3.345416128s: waiting for machine to come up
	I0729 14:38:07.182966 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:07.193166 1039263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:38:07.212801 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:07.221297 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:07.221331 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:07.221340 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:07.221347 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:07.221352 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:07.221364 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:38:07.221370 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:07.221379 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:07.221384 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:38:07.221390 1039263 system_pods.go:74] duration metric: took 8.574498ms to wait for pod list to return data ...
	I0729 14:38:07.221397 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:07.224197 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:07.224220 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:07.224231 1039263 node_conditions.go:105] duration metric: took 2.829585ms to run NodePressure ...
	I0729 14:38:07.224246 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:07.520049 1039263 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524228 1039263 kubeadm.go:739] kubelet initialised
	I0729 14:38:07.524251 1039263 kubeadm.go:740] duration metric: took 4.174563ms waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524262 1039263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:07.529174 1039263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.533534 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533554 1039263 pod_ready.go:81] duration metric: took 4.355926ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.533562 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.537529 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537550 1039263 pod_ready.go:81] duration metric: took 3.975082ms for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.537561 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.542299 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542325 1039263 pod_ready.go:81] duration metric: took 4.747863ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.542371 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542390 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.616688 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616725 1039263 pod_ready.go:81] duration metric: took 74.323327ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.616740 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616750 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.016334 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016360 1039263 pod_ready.go:81] duration metric: took 399.599984ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.016369 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016374 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.416536 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416571 1039263 pod_ready.go:81] duration metric: took 400.189243ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.416585 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416594 1039263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.817526 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817561 1039263 pod_ready.go:81] duration metric: took 400.956263ms for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.817572 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817590 1039263 pod_ready.go:38] duration metric: took 1.293313082s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
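Every pod above is skipped because the node itself is not yet Ready; once the node reports Ready, the same wait checks each pod's PodReady condition. A sketch of that per-pod check using client-go is below; the kubeconfig path, namespace, and pod name are taken from the log, the helper itself is illustrative, and the program assumes k8s.io/client-go is available on the module path.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19338-974764/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-6dhzz", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s ready: %v\n", pod.Name, isPodReady(pod))
    }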
	I0729 14:38:08.817610 1039263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:38:08.829394 1039263 ops.go:34] apiserver oom_adj: -16
	I0729 14:38:08.829425 1039263 kubeadm.go:597] duration metric: took 9.06319609s to restartPrimaryControlPlane
	I0729 14:38:08.829436 1039263 kubeadm.go:394] duration metric: took 9.111098315s to StartCluster
	I0729 14:38:08.829457 1039263 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.829548 1039263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:08.831113 1039263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.831396 1039263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:38:08.831441 1039263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:38:08.831524 1039263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-668123"
	I0729 14:38:08.831539 1039263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-668123"
	I0729 14:38:08.831562 1039263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-668123"
	W0729 14:38:08.831572 1039263 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:38:08.831561 1039263 addons.go:69] Setting metrics-server=true in profile "embed-certs-668123"
	I0729 14:38:08.831593 1039263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-668123"
	I0729 14:38:08.831601 1039263 addons.go:234] Setting addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:08.831609 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	W0729 14:38:08.831610 1039263 addons.go:243] addon metrics-server should already be in state true
	I0729 14:38:08.831632 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:08.831644 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.831916 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831933 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831918 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831956 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831945 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831964 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.833223 1039263 out.go:177] * Verifying Kubernetes components...
	I0729 14:38:08.834403 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:08.847231 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0729 14:38:08.847362 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0729 14:38:08.847398 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0729 14:38:08.847797 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847896 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847904 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.848350 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848371 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848487 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848507 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848520 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848540 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848854 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848867 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.849010 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849392 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.849416 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.851933 1039263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-668123"
	W0729 14:38:08.851955 1039263 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:38:08.851988 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.852284 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.852330 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.865255 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I0729 14:38:08.865707 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.865981 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0729 14:38:08.866157 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866183 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.866419 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.866531 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.866804 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.866895 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866920 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.867272 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.867839 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.867885 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.868000 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0729 14:38:08.868397 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.868741 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.868886 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.868903 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.869276 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.869501 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.870835 1039263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:38:08.871289 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.872267 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:38:08.872289 1039263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:38:08.872306 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.873165 1039263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:08.874593 1039263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:08.874616 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:38:08.874635 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.875061 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875572 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.875605 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875815 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.876012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.876208 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.876370 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.877997 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878394 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.878423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878555 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.878726 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.878889 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.879002 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.890720 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0729 14:38:08.891092 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.891619 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.891638 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.891972 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.892184 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.893577 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.893817 1039263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:08.893840 1039263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:38:08.893859 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.896843 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897302 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.897320 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897464 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.897618 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.897866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.897966 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:09.019001 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:09.038038 1039263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:09.097896 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:09.101844 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:09.229339 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:38:09.229360 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:38:09.317591 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:38:09.317625 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:38:09.370444 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:09.370490 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:38:09.407869 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:10.014739 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014767 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.014873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014897 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015112 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015150 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015157 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015166 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015174 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015284 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015297 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015306 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015313 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015384 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015413 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015611 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015641 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024010 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.024031 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.024299 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.024318 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024343 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.233873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.233903 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234247 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.234260 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234275 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234292 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.234301 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234546 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234563 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234574 1039263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:10.236215 1039263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:38:10.237377 1039263 addons.go:510] duration metric: took 1.405942032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:38:11.042263 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:12.129080 1039759 start.go:364] duration metric: took 3m18.14725367s to acquireMachinesLock for "old-k8s-version-360866"
	I0729 14:38:12.129155 1039759 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:12.129166 1039759 fix.go:54] fixHost starting: 
	I0729 14:38:12.129715 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:12.129752 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:12.146596 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0729 14:38:12.147101 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:12.147554 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:38:12.147581 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:12.147871 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:12.148094 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:12.148293 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetState
	I0729 14:38:12.149880 1039759 fix.go:112] recreateIfNeeded on old-k8s-version-360866: state=Stopped err=<nil>
	I0729 14:38:12.149918 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	W0729 14:38:12.150120 1039759 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:12.152003 1039759 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-360866" ...
	I0729 14:38:10.683699 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684108 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Found IP for machine: 192.168.72.233
	I0729 14:38:10.684148 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has current primary IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684161 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserving static IP address...
	I0729 14:38:10.684506 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.684540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | skip adding static IP to network mk-default-k8s-diff-port-751306 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"}
	I0729 14:38:10.684558 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserved static IP address: 192.168.72.233
	I0729 14:38:10.684581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for SSH to be available...
	I0729 14:38:10.684600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Getting to WaitForSSH function...
	I0729 14:38:10.686336 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686684 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.686713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686825 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH client type: external
	I0729 14:38:10.686856 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa (-rw-------)
	I0729 14:38:10.686894 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:10.686904 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | About to run SSH command:
	I0729 14:38:10.686921 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | exit 0
	I0729 14:38:10.808536 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:10.808965 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetConfigRaw
	I0729 14:38:10.809613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:10.812200 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812590 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.812625 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812862 1039440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/config.json ...
	I0729 14:38:10.813089 1039440 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:10.813110 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:10.813322 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.815607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.815933 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.815962 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.816113 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.816287 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816450 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.816838 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.817167 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.817184 1039440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:10.916864 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:10.916908 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917215 1039440 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751306"
	I0729 14:38:10.917249 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.919961 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920339 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.920363 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920471 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.920660 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.921145 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.921358 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.921377 1039440 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751306 && echo "default-k8s-diff-port-751306" | sudo tee /etc/hostname
	I0729 14:38:11.034826 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751306
	
	I0729 14:38:11.034859 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.037494 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.037836 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.037870 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.038068 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.038274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038410 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038575 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.038736 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.038971 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.038998 1039440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751306/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:11.146350 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:11.146391 1039440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:11.146449 1039440 buildroot.go:174] setting up certificates
	I0729 14:38:11.146463 1039440 provision.go:84] configureAuth start
	I0729 14:38:11.146478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:11.146842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:11.149492 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149766 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.149796 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.152449 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152735 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.152785 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152956 1039440 provision.go:143] copyHostCerts
	I0729 14:38:11.153010 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:11.153021 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:11.153074 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:11.153172 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:11.153180 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:11.153198 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:11.153253 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:11.153260 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:11.153276 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:11.153324 1039440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751306 san=[127.0.0.1 192.168.72.233 default-k8s-diff-port-751306 localhost minikube]
	I0729 14:38:11.489907 1039440 provision.go:177] copyRemoteCerts
	I0729 14:38:11.489990 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:11.490028 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.492487 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492801 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.492832 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492992 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.493220 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.493467 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.493611 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.574475 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:11.598182 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:11.622809 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 14:38:11.646533 1039440 provision.go:87] duration metric: took 500.054139ms to configureAuth
	I0729 14:38:11.646563 1039440 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:11.646742 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:11.646822 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.649260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.649616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649729 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.649934 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.650436 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.650610 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.650628 1039440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:11.906877 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:11.906918 1039440 machine.go:97] duration metric: took 1.093811728s to provisionDockerMachine
	I0729 14:38:11.906936 1039440 start.go:293] postStartSetup for "default-k8s-diff-port-751306" (driver="kvm2")
	I0729 14:38:11.906951 1039440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:11.906977 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:11.907366 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:11.907407 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.910366 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910725 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.910748 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910913 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.911162 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.911323 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.911529 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.992133 1039440 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:11.996426 1039440 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:11.996456 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:11.996544 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:11.996641 1039440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:11.996747 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:12.006165 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:12.029591 1039440 start.go:296] duration metric: took 122.613174ms for postStartSetup
	I0729 14:38:12.029643 1039440 fix.go:56] duration metric: took 18.376148792s for fixHost
	I0729 14:38:12.029670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.032299 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032667 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.032731 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032901 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.033104 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033372 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.033510 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:12.033679 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:12.033688 1039440 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:12.128889 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263892.107886376
	
	I0729 14:38:12.128917 1039440 fix.go:216] guest clock: 1722263892.107886376
	I0729 14:38:12.128926 1039440 fix.go:229] Guest: 2024-07-29 14:38:12.107886376 +0000 UTC Remote: 2024-07-29 14:38:12.029648961 +0000 UTC m=+239.632909800 (delta=78.237415ms)
	I0729 14:38:12.128955 1039440 fix.go:200] guest clock delta is within tolerance: 78.237415ms
	I0729 14:38:12.128961 1039440 start.go:83] releasing machines lock for "default-k8s-diff-port-751306", held for 18.475504041s
	I0729 14:38:12.128995 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.129301 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:12.132025 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132374 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.132401 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132566 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133087 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133273 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133349 1039440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:12.133404 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.133513 1039440 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:12.133534 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.136121 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136149 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136523 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136577 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136624 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136716 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136793 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136917 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.136973 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.137088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137165 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137292 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.137232 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.233842 1039440 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:12.240082 1039440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:12.388404 1039440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:12.395038 1039440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:12.395127 1039440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:12.416590 1039440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:12.416618 1039440 start.go:495] detecting cgroup driver to use...
	I0729 14:38:12.416682 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:12.437863 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:12.453458 1039440 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:12.453508 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:12.467657 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:12.482328 1039440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:12.610786 1039440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:12.774787 1039440 docker.go:233] disabling docker service ...
	I0729 14:38:12.774861 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:12.790091 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:12.803914 1039440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:12.933894 1039440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:13.052159 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:13.069309 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:13.089959 1039440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:38:13.090014 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.102668 1039440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:13.102741 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.113634 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.124374 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.135488 1039440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:13.147171 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.159757 1039440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.178620 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.189326 1039440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:13.200007 1039440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:13.200067 1039440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:13.213063 1039440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:13.226044 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:13.360685 1039440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:13.508473 1039440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:13.508556 1039440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:13.513547 1039440 start.go:563] Will wait 60s for crictl version
	I0729 14:38:13.513619 1039440 ssh_runner.go:195] Run: which crictl
	I0729 14:38:13.518528 1039440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:13.567103 1039440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:13.567180 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.603837 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.633583 1039440 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:38:12.153214 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .Start
	I0729 14:38:12.153408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring networks are active...
	I0729 14:38:12.154141 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network default is active
	I0729 14:38:12.154590 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network mk-old-k8s-version-360866 is active
	I0729 14:38:12.154970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Getting domain xml...
	I0729 14:38:12.155733 1039759 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:38:12.526504 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting to get IP...
	I0729 14:38:12.527560 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.528068 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.528147 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.528048 1040622 retry.go:31] will retry after 240.079974ms: waiting for machine to come up
	I0729 14:38:12.769388 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.769881 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.769910 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.769829 1040622 retry.go:31] will retry after 271.200632ms: waiting for machine to come up
	I0729 14:38:13.042584 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.043069 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.043101 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.043049 1040622 retry.go:31] will retry after 464.725959ms: waiting for machine to come up
	I0729 14:38:13.509830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.510400 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.510434 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.510350 1040622 retry.go:31] will retry after 416.316047ms: waiting for machine to come up
	I0729 14:38:13.042877 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:15.051282 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:13.635092 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:13.638202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638665 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:13.638691 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638933 1039440 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:13.642960 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:13.656098 1039440 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:13.656208 1039440 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:38:13.656255 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:13.697398 1039440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:38:13.697475 1039440 ssh_runner.go:195] Run: which lz4
	I0729 14:38:13.701632 1039440 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:38:13.707129 1039440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:13.707162 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:38:15.218414 1039440 crio.go:462] duration metric: took 1.516807674s to copy over tarball
	I0729 14:38:15.218505 1039440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:13.927885 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.928343 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.928373 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.928307 1040622 retry.go:31] will retry after 659.670364ms: waiting for machine to come up
	I0729 14:38:14.589644 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:14.590143 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:14.590172 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:14.590031 1040622 retry.go:31] will retry after 738.020335ms: waiting for machine to come up
	I0729 14:38:15.330093 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:15.330603 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:15.330633 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:15.330553 1040622 retry.go:31] will retry after 1.13067902s: waiting for machine to come up
	I0729 14:38:16.462554 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:16.463002 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:16.463031 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:16.462977 1040622 retry.go:31] will retry after 1.342785853s: waiting for machine to come up
	I0729 14:38:17.806889 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:17.807333 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:17.807365 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:17.807266 1040622 retry.go:31] will retry after 1.804812934s: waiting for machine to come up
	I0729 14:38:16.550848 1039263 node_ready.go:49] node "embed-certs-668123" has status "Ready":"True"
	I0729 14:38:16.550880 1039263 node_ready.go:38] duration metric: took 7.512808712s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:16.550895 1039263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:16.563220 1039263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570054 1039263 pod_ready.go:92] pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:16.570080 1039263 pod_ready.go:81] duration metric: took 6.832939ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570091 1039263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:19.207981 1039263 pod_ready.go:102] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:17.498961 1039440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.280415291s)
	I0729 14:38:17.498997 1039440 crio.go:469] duration metric: took 2.280548689s to extract the tarball
	I0729 14:38:17.499008 1039440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:17.537972 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:17.583582 1039440 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:38:17.583609 1039440 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:38:17.583617 1039440 kubeadm.go:934] updating node { 192.168.72.233 8444 v1.30.3 crio true true} ...
	I0729 14:38:17.583719 1039440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:17.583789 1039440 ssh_runner.go:195] Run: crio config
	I0729 14:38:17.637202 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:17.637230 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:17.637243 1039440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:17.637272 1039440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.233 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751306 NodeName:default-k8s-diff-port-751306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:38:17.637451 1039440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751306"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:17.637528 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:38:17.650173 1039440 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:17.650259 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:17.661790 1039440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 14:38:17.680720 1039440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:17.700420 1039440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 14:38:17.723134 1039440 ssh_runner.go:195] Run: grep 192.168.72.233	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:17.727510 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:17.741033 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:17.889833 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:17.910486 1039440 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306 for IP: 192.168.72.233
	I0729 14:38:17.910540 1039440 certs.go:194] generating shared ca certs ...
	I0729 14:38:17.910565 1039440 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:17.910763 1039440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:17.910821 1039440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:17.910833 1039440 certs.go:256] generating profile certs ...
	I0729 14:38:17.910941 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/client.key
	I0729 14:38:17.911003 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key.811a3f6d
	I0729 14:38:17.911105 1039440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key
	I0729 14:38:17.911271 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:17.911315 1039440 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:17.911329 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:17.911362 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:17.911393 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:17.911426 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:17.911478 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:17.912301 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:17.948102 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:17.984122 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:18.019932 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:18.062310 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 14:38:18.093176 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:38:18.124016 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:18.151933 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:38:18.179714 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:18.203414 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:18.233286 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:18.262871 1039440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:18.283064 1039440 ssh_runner.go:195] Run: openssl version
	I0729 14:38:18.289016 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:18.299409 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304053 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304115 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.309976 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:18.321472 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:18.331916 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336822 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336881 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.342762 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:18.353478 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:18.364299 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369024 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369076 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.376534 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:18.387360 1039440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:18.392392 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:18.398520 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:18.404397 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:18.410922 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:18.417193 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:18.423808 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:18.433345 1039440 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:18.433463 1039440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:18.433582 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.476749 1039440 cri.go:89] found id: ""
	I0729 14:38:18.476834 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:18.488548 1039440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:18.488570 1039440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:18.488628 1039440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:18.499081 1039440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:18.500064 1039440 kubeconfig.go:125] found "default-k8s-diff-port-751306" server: "https://192.168.72.233:8444"
	I0729 14:38:18.502130 1039440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:18.511589 1039440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.233
	I0729 14:38:18.511631 1039440 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:18.511646 1039440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:18.511698 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.559691 1039440 cri.go:89] found id: ""
	I0729 14:38:18.559779 1039440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:18.576217 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:18.586031 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:18.586057 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:18.586110 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:38:18.595032 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:18.595096 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:18.604320 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:38:18.613996 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:18.614053 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:18.623345 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.631898 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:18.631943 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.641303 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:38:18.649849 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:18.649907 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:18.659657 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:18.668914 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:18.782351 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:19.902413 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.120025721s)
	I0729 14:38:19.902451 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.120455 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.206099 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.293738 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:20.293850 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:20.794840 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.294958 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.313567 1039440 api_server.go:72] duration metric: took 1.019826572s to wait for apiserver process to appear ...
	I0729 14:38:21.313600 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:21.313625 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:21.314152 1039440 api_server.go:269] stopped: https://192.168.72.233:8444/healthz: Get "https://192.168.72.233:8444/healthz": dial tcp 192.168.72.233:8444: connect: connection refused
	I0729 14:38:21.813935 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:19.613474 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:19.613801 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:19.613830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:19.613749 1040622 retry.go:31] will retry after 1.449593132s: waiting for machine to come up
	I0729 14:38:21.064774 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:21.065382 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:21.065405 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:21.065314 1040622 retry.go:31] will retry after 1.807508073s: waiting for machine to come up
	I0729 14:38:22.874485 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:22.874896 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:22.874925 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:22.874844 1040622 retry.go:31] will retry after 3.036719557s: waiting for machine to come up
	I0729 14:38:21.578125 1039263 pod_ready.go:92] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.578152 1039263 pod_ready.go:81] duration metric: took 5.008051755s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.578164 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584521 1039263 pod_ready.go:92] pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.584544 1039263 pod_ready.go:81] duration metric: took 6.372252ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584558 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590245 1039263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.590269 1039263 pod_ready.go:81] duration metric: took 5.702853ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590280 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594576 1039263 pod_ready.go:92] pod "kube-proxy-2v79q" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.594602 1039263 pod_ready.go:81] duration metric: took 4.314692ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594614 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787339 1039263 pod_ready.go:92] pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.787379 1039263 pod_ready.go:81] duration metric: took 192.756548ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787399 1039263 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:23.795588 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:24.561135 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:24.561176 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:24.561195 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.635519 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.635550 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:24.813755 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.817972 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.818003 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.314643 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.320059 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.320094 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.814758 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.820578 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.820613 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.314798 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.319346 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.319384 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.813907 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.821176 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.821208 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.314614 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.319335 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:27.319361 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.814188 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.819010 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:38:27.826057 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:27.826082 1039440 api_server.go:131] duration metric: took 6.512474877s to wait for apiserver health ...
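	(Aside: the healthz wait above simply polls the apiserver's /healthz endpoint until it answers 200 instead of 500, then records the elapsed time. A minimal sketch of that polling pattern in Go, purely illustrative and not minikube's actual api_server.go; the URL, interval, and timeout below are assumptions taken loosely from the log.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// The endpoint is served over TLS with a cluster-internal CA, so this probe
// skips certificate verification, as a kubeconfig-less check would.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok": control plane is healthy
			}
			// A 500 with "[-]poststarthook/... failed" means not ready yet; retry.
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.233:8444/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}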
	I0729 14:38:27.826091 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:27.826098 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:27.827698 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:25.913642 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:25.914139 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:25.914166 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:25.914099 1040622 retry.go:31] will retry after 3.839238383s: waiting for machine to come up
	I0729 14:38:26.293618 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:28.294115 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:30.296010 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.361688 1038758 start.go:364] duration metric: took 52.182622805s to acquireMachinesLock for "no-preload-603534"
	I0729 14:38:31.361756 1038758 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:31.361765 1038758 fix.go:54] fixHost starting: 
	I0729 14:38:31.362279 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:31.362319 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:31.380259 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0729 14:38:31.380783 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:31.381320 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:38:31.381349 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:31.381649 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:31.381848 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:31.381989 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:38:31.383537 1038758 fix.go:112] recreateIfNeeded on no-preload-603534: state=Stopped err=<nil>
	I0729 14:38:31.383561 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	W0729 14:38:31.383739 1038758 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:31.385496 1038758 out.go:177] * Restarting existing kvm2 VM for "no-preload-603534" ...
	I0729 14:38:31.386878 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Start
	I0729 14:38:31.387071 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring networks are active...
	I0729 14:38:31.387821 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network default is active
	I0729 14:38:31.388141 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network mk-no-preload-603534 is active
	I0729 14:38:31.388649 1038758 main.go:141] libmachine: (no-preload-603534) Getting domain xml...
	I0729 14:38:31.391807 1038758 main.go:141] libmachine: (no-preload-603534) Creating domain...
	I0729 14:38:27.829109 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:27.839810 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:38:27.858724 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:27.868075 1039440 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:27.868112 1039440 system_pods.go:61] "coredns-7db6d8ff4d-m6dlw" [7ce45b48-f04d-4167-8a6e-643b2fb3c4f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:27.868121 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [7ccadfd7-8b68-45c0-9670-af97b90d35d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:27.868128 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [5e8c8e17-28db-499c-a940-e67d92b28bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:27.868134 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [a2d31d58-d8d9-4070-96af-0d1af763d0b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:27.868140 1039440 system_pods.go:61] "kube-proxy-p6dv5" [c44edf0a-f608-49f2-ab53-7ffbcdf13b5e] Running
	I0729 14:38:27.868146 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [b87ee044-f43f-4aa7-94b3-4f2ad2213ce9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:27.868152 1039440 system_pods.go:61] "metrics-server-569cc877fc-gmz64" [296e883c-7394-4004-a25f-e93b4be52d46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:27.868156 1039440 system_pods.go:61] "storage-provisioner" [ec3b78f1-96a3-47b2-958d-82258a074634] Running
	I0729 14:38:27.868165 1039440 system_pods.go:74] duration metric: took 9.405484ms to wait for pod list to return data ...
	I0729 14:38:27.868173 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:27.871538 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:27.871563 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:27.871575 1039440 node_conditions.go:105] duration metric: took 3.397306ms to run NodePressure ...
	I0729 14:38:27.871596 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:28.143890 1039440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148855 1039440 kubeadm.go:739] kubelet initialised
	I0729 14:38:28.148880 1039440 kubeadm.go:740] duration metric: took 4.952057ms waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148891 1039440 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:28.154636 1039440 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:30.161265 1039440 pod_ready.go:102] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.161979 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:31.162005 1039440 pod_ready.go:81] duration metric: took 3.007344998s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:31.162015 1039440 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
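	(Aside: the pod_ready.go waits above amount to polling a pod until its Ready condition becomes True. A rough sketch with client-go, shown only to make the check explicit; the kubeconfig path is a placeholder and this is not minikube's own implementation.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll for up to 4 minutes, matching the "waiting up to 4m0s" lines above.
	for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(2 * time.Second) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-751306", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
	}
	fmt.Println("timed out waiting for pod to become Ready")
}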
	I0729 14:38:29.755060 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755512 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has current primary IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755524 1039759 main.go:141] libmachine: (old-k8s-version-360866) Found IP for machine: 192.168.39.71
	I0729 14:38:29.755536 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserving static IP address...
	I0729 14:38:29.755975 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.756008 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserved static IP address: 192.168.39.71
	I0729 14:38:29.756035 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | skip adding static IP to network mk-old-k8s-version-360866 - found existing host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"}
	I0729 14:38:29.756048 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting for SSH to be available...
	I0729 14:38:29.756067 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Getting to WaitForSSH function...
	I0729 14:38:29.758527 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.758899 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.758944 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.759003 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH client type: external
	I0729 14:38:29.759024 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa (-rw-------)
	I0729 14:38:29.759058 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:29.759070 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | About to run SSH command:
	I0729 14:38:29.759083 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | exit 0
	I0729 14:38:29.884425 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:29.884833 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:38:29.885450 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:29.887929 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888241 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.888294 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888624 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:38:29.888895 1039759 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:29.888919 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:29.889221 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.891654 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892013 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.892038 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892163 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.892350 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892598 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892764 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.892968 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.893158 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.893169 1039759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:29.993529 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:29.993564 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.993859 1039759 buildroot.go:166] provisioning hostname "old-k8s-version-360866"
	I0729 14:38:29.993893 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.994074 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.996882 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997279 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.997308 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997537 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.997699 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997856 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997976 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.998206 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.998412 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.998429 1039759 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-360866 && echo "old-k8s-version-360866" | sudo tee /etc/hostname
	I0729 14:38:30.115298 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-360866
	
	I0729 14:38:30.115331 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.118349 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.118763 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.118793 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.119029 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.119203 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119356 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119561 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.119772 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.119976 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.120019 1039759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-360866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-360866/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-360866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:30.229987 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:30.230017 1039759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:30.230059 1039759 buildroot.go:174] setting up certificates
	I0729 14:38:30.230070 1039759 provision.go:84] configureAuth start
	I0729 14:38:30.230090 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:30.230436 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:30.233150 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233501 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.233533 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233719 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.236157 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236494 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.236534 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236713 1039759 provision.go:143] copyHostCerts
	I0729 14:38:30.236786 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:30.236797 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:30.236856 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:30.236976 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:30.236986 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:30.237006 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:30.237071 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:30.237078 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:30.237095 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:30.237153 1039759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-360866 san=[127.0.0.1 192.168.39.71 localhost minikube old-k8s-version-360866]
	I0729 14:38:30.680859 1039759 provision.go:177] copyRemoteCerts
	I0729 14:38:30.680933 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:30.680970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.683890 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684229 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.684262 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684430 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.684634 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.684822 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.684973 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:30.770659 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:30.799011 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 14:38:30.825536 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:30.850751 1039759 provision.go:87] duration metric: took 620.664228ms to configureAuth
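	(Aside: configureAuth above regenerates a CA-signed server certificate whose SANs cover 127.0.0.1, the machine IP, and the host names listed in the "generating server cert" line. A compact sketch of that kind of certificate creation with Go's crypto/x509; key size, serial number, and validity are assumptions for illustration, not minikube's exact choices.)

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate with the given CA, covering the
// IP and DNS subject alternative names that clients will use to reach it.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-360866"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration seen later in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.71")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-360866"},
	}
	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return certDER, key, err
}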
	I0729 14:38:30.850795 1039759 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:30.850998 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:38:30.851072 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.853735 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854065 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.854102 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854197 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.854408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854559 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854717 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.854961 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.855169 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.855187 1039759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:31.119354 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:31.119386 1039759 machine.go:97] duration metric: took 1.230472142s to provisionDockerMachine
	I0729 14:38:31.119401 1039759 start.go:293] postStartSetup for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:38:31.119415 1039759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:31.119456 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.119885 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:31.119926 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.123196 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123576 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.123607 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123826 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.124053 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.124276 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.124469 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.208607 1039759 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:31.213173 1039759 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:31.213206 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:31.213268 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:31.213352 1039759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:31.213454 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:31.225256 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:31.253156 1039759 start.go:296] duration metric: took 133.735669ms for postStartSetup
	I0729 14:38:31.253208 1039759 fix.go:56] duration metric: took 19.124042428s for fixHost
	I0729 14:38:31.253237 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.256005 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256340 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.256375 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256535 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.256732 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.256927 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.257075 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.257272 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:31.257445 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:31.257455 1039759 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:31.361488 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263911.340365932
	
	I0729 14:38:31.361512 1039759 fix.go:216] guest clock: 1722263911.340365932
	I0729 14:38:31.361519 1039759 fix.go:229] Guest: 2024-07-29 14:38:31.340365932 +0000 UTC Remote: 2024-07-29 14:38:31.253213714 +0000 UTC m=+217.413183116 (delta=87.152218ms)
	I0729 14:38:31.361572 1039759 fix.go:200] guest clock delta is within tolerance: 87.152218ms
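	(Aside: the guest-clock check above compares the VM's "date +%s.%N" output with the host-side timestamp and only resyncs when the difference exceeds a tolerance. A small sketch of that comparison; the one-second tolerance here is an assumption for illustration.)

package main

import (
	"fmt"
	"time"
)

// clockSkewWithinTolerance reports whether |guest - host| is small enough to skip a resync.
func clockSkewWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1722263911, 340365932) // from "date +%s.%N" on the VM
	host := time.Unix(1722263911, 253213714)  // host-side reference time
	delta, ok := clockSkewWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok) // delta is ~87ms, so ok is true
}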
	I0729 14:38:31.361583 1039759 start.go:83] releasing machines lock for "old-k8s-version-360866", held for 19.232453759s
	I0729 14:38:31.361611 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.361921 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:31.364981 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365412 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.365441 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365648 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366227 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366482 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366583 1039759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:31.366644 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.366761 1039759 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:31.366797 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.369658 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.369699 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370051 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370081 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370105 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370125 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370309 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370325 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370567 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370568 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370773 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370809 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370958 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.370957 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.472108 1039759 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:31.478939 1039759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:31.630720 1039759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:31.637768 1039759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:31.637874 1039759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:31.655476 1039759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:31.655504 1039759 start.go:495] detecting cgroup driver to use...
	I0729 14:38:31.655584 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:31.679387 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:31.704260 1039759 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:31.704318 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:31.727875 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:31.743197 1039759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:31.867502 1039759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:32.035088 1039759 docker.go:233] disabling docker service ...
	I0729 14:38:32.035169 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:32.050118 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:32.064828 1039759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:32.202938 1039759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:32.333330 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:32.348845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:32.369848 1039759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:38:32.369923 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.381787 1039759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:32.381893 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.394331 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.405323 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.417259 1039759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:32.428997 1039759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:32.440934 1039759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:32.441003 1039759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:32.454949 1039759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:32.466042 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:32.596308 1039759 ssh_runner.go:195] Run: sudo systemctl restart crio
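	(Aside: the sed commands above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf, the pause image and the cgroup manager, before CRI-O is restarted. A rough Go equivalent of that line rewrite, shown only to make the transformation explicit; the helper itself is hypothetical, while the values come from the log.)

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits from the log: force the pause image
// and the cgroup manager to the desired values, one whole line at a time.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}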
	I0729 14:38:32.762548 1039759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:32.762632 1039759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:32.768336 1039759 start.go:563] Will wait 60s for crictl version
	I0729 14:38:32.768447 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:32.772850 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:32.829827 1039759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:32.829936 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.863269 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.897768 1039759 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:38:32.899209 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:32.902257 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902649 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:32.902680 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902928 1039759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:32.908590 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:32.921952 1039759 kubeadm.go:883] updating cluster {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:32.922094 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:38:32.922141 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:32.969932 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:32.970003 1039759 ssh_runner.go:195] Run: which lz4
	I0729 14:38:32.974564 1039759 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:38:32.980128 1039759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:32.980173 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:38:32.795590 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.295541 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.750580 1038758 main.go:141] libmachine: (no-preload-603534) Waiting to get IP...
	I0729 14:38:31.751732 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.752236 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.752340 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.752236 1040763 retry.go:31] will retry after 239.008836ms: waiting for machine to come up
	I0729 14:38:31.993011 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.993538 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.993569 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.993481 1040763 retry.go:31] will retry after 288.863538ms: waiting for machine to come up
	I0729 14:38:32.284306 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.284941 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.284980 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.284867 1040763 retry.go:31] will retry after 410.903425ms: waiting for machine to come up
	I0729 14:38:32.697686 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.698291 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.698322 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.698227 1040763 retry.go:31] will retry after 423.090324ms: waiting for machine to come up
	I0729 14:38:33.122914 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.123550 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.123579 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.123500 1040763 retry.go:31] will retry after 744.030348ms: waiting for machine to come up
	I0729 14:38:33.869849 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.870499 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.870523 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.870456 1040763 retry.go:31] will retry after 888.516658ms: waiting for machine to come up
	I0729 14:38:34.760145 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:34.760594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:34.760627 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:34.760534 1040763 retry.go:31] will retry after 889.371631ms: waiting for machine to come up
	I0729 14:38:35.651169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:35.651700 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:35.651731 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:35.651636 1040763 retry.go:31] will retry after 1.200333492s: waiting for machine to come up
	I0729 14:38:33.181695 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.672201 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:34.707140 1039759 crio.go:462] duration metric: took 1.732619622s to copy over tarball
	I0729 14:38:34.707232 1039759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:37.740076 1039759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.032804006s)
	I0729 14:38:37.740105 1039759 crio.go:469] duration metric: took 3.032930405s to extract the tarball
	I0729 14:38:37.740113 1039759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:37.786934 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:37.827451 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:37.827484 1039759 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:37.827576 1039759 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:37.827606 1039759 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.827624 1039759 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.827702 1039759 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.827607 1039759 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.827683 1039759 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829621 1039759 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.829709 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.829724 1039759 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.829628 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829808 1039759 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:38:37.829625 1039759 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.113249 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.373433 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.378382 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.380909 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.382431 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.391678 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.392565 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.419739 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:38:38.491174 1039759 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:38:38.491255 1039759 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.491320 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570681 1039759 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:38:38.570784 1039759 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:38:38.570832 1039759 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.570889 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570792 1039759 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.570721 1039759 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:38:38.570966 1039759 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.570977 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570992 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.576687 1039759 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:38:38.576728 1039759 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.576769 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587650 1039759 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:38:38.587699 1039759 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.587701 1039759 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:38:38.587738 1039759 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:38:38.587753 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587791 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587866 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.587883 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.587913 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.587948 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.591209 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.599567 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.610869 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:38:38.742939 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:38:38.742974 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:38:38.743091 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:38:38.743098 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:38:38.745789 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:38:38.745857 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:38:38.753643 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:38:38.753704 1039759 cache_images.go:92] duration metric: took 926.203812ms to LoadCachedImages
	W0729 14:38:38.753790 1039759 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 14:38:38.753804 1039759 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.20.0 crio true true} ...
	I0729 14:38:38.753931 1039759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-360866 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:38.753992 1039759 ssh_runner.go:195] Run: crio config
	I0729 14:38:38.802220 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:38:38.802246 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:38.802258 1039759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:38.802285 1039759 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-360866 NodeName:old-k8s-version-360866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:38:38.802487 1039759 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-360866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:38.802591 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:38:38.816832 1039759 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:38.816934 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:38.827468 1039759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 14:38:38.847125 1039759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:38.865302 1039759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 14:38:37.795799 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:40.294979 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:36.853388 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:36.853944 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:36.853979 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:36.853881 1040763 retry.go:31] will retry after 1.750535475s: waiting for machine to come up
	I0729 14:38:38.605644 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:38.606135 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:38.606185 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:38.606079 1040763 retry.go:31] will retry after 2.245294623s: waiting for machine to come up
	I0729 14:38:40.853761 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:40.854277 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:40.854311 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:40.854214 1040763 retry.go:31] will retry after 1.864975071s: waiting for machine to come up
	I0729 14:38:38.299326 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:39.170692 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.170720 1039440 pod_ready.go:81] duration metric: took 8.008696752s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.170735 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177419 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.177449 1039440 pod_ready.go:81] duration metric: took 6.705958ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177463 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185538 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.185566 1039440 pod_ready.go:81] duration metric: took 2.008093791s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185580 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193833 1039440 pod_ready.go:92] pod "kube-proxy-p6dv5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.193864 1039440 pod_ready.go:81] duration metric: took 8.275486ms for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193878 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200931 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.200963 1039440 pod_ready.go:81] duration metric: took 7.075212ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200978 1039440 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:38.884267 1039759 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:38.889206 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:38.905643 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:39.032065 1039759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:39.051892 1039759 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866 for IP: 192.168.39.71
	I0729 14:38:39.051991 1039759 certs.go:194] generating shared ca certs ...
	I0729 14:38:39.052019 1039759 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.052203 1039759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:39.052258 1039759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:39.052270 1039759 certs.go:256] generating profile certs ...
	I0729 14:38:39.091359 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key
	I0729 14:38:39.091485 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0
	I0729 14:38:39.091554 1039759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key
	I0729 14:38:39.091718 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:39.091763 1039759 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:39.091776 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:39.091804 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:39.091835 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:39.091867 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:39.091924 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:39.092850 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:39.125528 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:39.153093 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:39.181324 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:39.235516 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 14:38:39.262599 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:38:39.293085 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:39.326318 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:38:39.361548 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:39.386876 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:39.412529 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:39.438418 1039759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:39.459519 1039759 ssh_runner.go:195] Run: openssl version
	I0729 14:38:39.466109 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:39.477941 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482748 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482820 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.489099 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:39.500207 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:39.511513 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516125 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516183 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.522297 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:39.533536 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:39.544996 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549681 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549733 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.556332 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:39.571393 1039759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:39.578420 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:39.586316 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:39.593450 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:39.600604 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:39.607483 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:39.614692 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:39.621776 1039759 kubeadm.go:392] StartCluster: {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:39.621893 1039759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:39.621955 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.673544 1039759 cri.go:89] found id: ""
	I0729 14:38:39.673634 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:39.687887 1039759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:39.687912 1039759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:39.687963 1039759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:39.701616 1039759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:39.702914 1039759 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:39.703576 1039759 kubeconfig.go:62] /home/jenkins/minikube-integration/19338-974764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-360866" cluster setting kubeconfig missing "old-k8s-version-360866" context setting]
	I0729 14:38:39.704951 1039759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.715056 1039759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:39.728384 1039759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.71
	I0729 14:38:39.728448 1039759 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:39.728466 1039759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:39.728534 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.778476 1039759 cri.go:89] found id: ""
	I0729 14:38:39.778561 1039759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:39.800712 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:39.813243 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:39.813265 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:39.813323 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:38:39.824822 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:39.824897 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:39.834966 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:38:39.847660 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:39.847887 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:39.861117 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.873868 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:39.873936 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.884195 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:38:39.895155 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:39.895234 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:39.909138 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:39.920721 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:40.055932 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.173909 1039759 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.117933178s)
	I0729 14:38:41.173947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.419684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.550852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.655941 1039759 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:41.656040 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.156080 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.656948 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.656087 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.794217 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.293634 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:42.720182 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:42.720674 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:42.720701 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:42.720614 1040763 retry.go:31] will retry after 2.929394717s: waiting for machine to come up
	I0729 14:38:45.653508 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:45.654044 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:45.654069 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:45.653993 1040763 retry.go:31] will retry after 4.133064498s: waiting for machine to come up
	I0729 14:38:43.208287 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.706607 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:44.156583 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:44.657199 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.156268 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.656786 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.156393 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.656151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.156507 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.656922 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.156840 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.656756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.294322 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.795189 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.789721 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790248 1038758 main.go:141] libmachine: (no-preload-603534) Found IP for machine: 192.168.61.116
	I0729 14:38:49.790272 1038758 main.go:141] libmachine: (no-preload-603534) Reserving static IP address...
	I0729 14:38:49.790290 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has current primary IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790823 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.790860 1038758 main.go:141] libmachine: (no-preload-603534) Reserved static IP address: 192.168.61.116
	I0729 14:38:49.790883 1038758 main.go:141] libmachine: (no-preload-603534) DBG | skip adding static IP to network mk-no-preload-603534 - found existing host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"}
	I0729 14:38:49.790920 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Getting to WaitForSSH function...
	I0729 14:38:49.790937 1038758 main.go:141] libmachine: (no-preload-603534) Waiting for SSH to be available...
	I0729 14:38:49.793243 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793646 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.793679 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793820 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH client type: external
	I0729 14:38:49.793850 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa (-rw-------)
	I0729 14:38:49.793884 1038758 main.go:141] libmachine: (no-preload-603534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:49.793899 1038758 main.go:141] libmachine: (no-preload-603534) DBG | About to run SSH command:
	I0729 14:38:49.793961 1038758 main.go:141] libmachine: (no-preload-603534) DBG | exit 0
	I0729 14:38:49.924827 1038758 main.go:141] libmachine: (no-preload-603534) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:49.925188 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetConfigRaw
	I0729 14:38:49.925835 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:49.928349 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.928799 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.928830 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.929091 1038758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/config.json ...
	I0729 14:38:49.929313 1038758 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:49.929334 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:49.929556 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:49.932040 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932431 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.932473 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932629 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:49.932798 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.932930 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.933033 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:49.933142 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:49.933313 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:49.933324 1038758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:50.049016 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:50.049059 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049328 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:38:50.049354 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049566 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.052138 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052532 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.052561 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052736 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.052918 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053093 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053269 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.053462 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.053641 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.053653 1038758 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-603534 && echo "no-preload-603534" | sudo tee /etc/hostname
	I0729 14:38:50.189302 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-603534
	
	I0729 14:38:50.189341 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.192559 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.192949 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.192974 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.193248 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.193476 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193689 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193870 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.194082 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.194305 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.194329 1038758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603534/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:50.322506 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:50.322540 1038758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:50.322564 1038758 buildroot.go:174] setting up certificates
	I0729 14:38:50.322577 1038758 provision.go:84] configureAuth start
	I0729 14:38:50.322589 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.322938 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:50.325594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.325957 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.325994 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.326139 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.328455 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328803 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.328828 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328950 1038758 provision.go:143] copyHostCerts
	I0729 14:38:50.329015 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:50.329025 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:50.329078 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:50.329165 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:50.329173 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:50.329192 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:50.329243 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:50.329249 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:50.329264 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:50.329310 1038758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.no-preload-603534 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-603534]
	I0729 14:38:50.447706 1038758 provision.go:177] copyRemoteCerts
	I0729 14:38:50.447777 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:50.447810 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.450714 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451106 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.451125 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451444 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.451679 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.451855 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.451975 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.539025 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:50.567887 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:50.594581 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 14:38:50.619475 1038758 provision.go:87] duration metric: took 296.880769ms to configureAuth
	I0729 14:38:50.619509 1038758 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:50.619708 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:38:50.619797 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.622753 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623121 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.623151 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623331 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.623519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623684 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623813 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.623971 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.624151 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.624168 1038758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:50.895618 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:50.895649 1038758 machine.go:97] duration metric: took 966.320375ms to provisionDockerMachine
	I0729 14:38:50.895662 1038758 start.go:293] postStartSetup for "no-preload-603534" (driver="kvm2")
	I0729 14:38:50.895684 1038758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:50.895717 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:50.896084 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:50.896112 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.899586 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.899998 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.900031 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.900168 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.900424 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.900622 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.900799 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.987195 1038758 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:50.991924 1038758 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:50.991952 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:50.992025 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:50.992111 1038758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:50.992208 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:51.002048 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:51.029714 1038758 start.go:296] duration metric: took 134.037621ms for postStartSetup
	I0729 14:38:51.029758 1038758 fix.go:56] duration metric: took 19.66799406s for fixHost
	I0729 14:38:51.029782 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.032495 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.032819 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.032844 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.033049 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.033236 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033377 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033587 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.033767 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:51.034007 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:51.034021 1038758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:38:51.149481 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263931.130931233
	
	I0729 14:38:51.149510 1038758 fix.go:216] guest clock: 1722263931.130931233
	I0729 14:38:51.149520 1038758 fix.go:229] Guest: 2024-07-29 14:38:51.130931233 +0000 UTC Remote: 2024-07-29 14:38:51.029761931 +0000 UTC m=+354.409484230 (delta=101.169302ms)
	I0729 14:38:51.149575 1038758 fix.go:200] guest clock delta is within tolerance: 101.169302ms
	I0729 14:38:51.149583 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 19.787859214s
	I0729 14:38:51.149617 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.149923 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:51.152671 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153054 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.153081 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153298 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.153898 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154092 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154192 1038758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:51.154245 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.154349 1038758 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:51.154378 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.157173 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157200 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157560 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157592 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157635 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157654 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157955 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.157976 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.158169 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158195 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158370 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158381 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158565 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.158572 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.260806 1038758 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:51.266847 1038758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:51.412637 1038758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:51.418879 1038758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:51.418954 1038758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:51.435946 1038758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:51.435978 1038758 start.go:495] detecting cgroup driver to use...
	I0729 14:38:51.436061 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:51.457517 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:51.472718 1038758 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:51.472811 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:51.487062 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:51.501410 1038758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:51.617292 1038758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:47.708063 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.708506 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.209337 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:51.764302 1038758 docker.go:233] disabling docker service ...
	I0729 14:38:51.764386 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:51.779137 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:51.794372 1038758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:51.930402 1038758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:52.062691 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:52.076796 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:52.095912 1038758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 14:38:52.095994 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.107507 1038758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:52.107588 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.119470 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.131252 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.141672 1038758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:52.152086 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.163682 1038758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.189614 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.200279 1038758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:52.211878 1038758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:52.211943 1038758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:52.224909 1038758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:52.234312 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:52.357370 1038758 ssh_runner.go:195] Run: sudo systemctl restart crio
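	The block above is minikube reconfiguring CRI-O before restarting it. Condensed into the equivalent manual steps (a sketch assembled from the commands in this log; the pause image tag and the 02-crio.conf path are specific to this CRI-O 1.29 guest and may differ elsewhere):
	  # point crictl at the CRI-O socket
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  # pause image and cgroup driver must match what kubelet will be configured with
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  # make bridged traffic visible to iptables and enable forwarding, then restart CRI-O
	  sudo modprobe br_netfilter
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sudo systemctl daemon-reload && sudo systemctl restart crio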
	I0729 14:38:52.492520 1038758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:52.492622 1038758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:52.497537 1038758 start.go:563] Will wait 60s for crictl version
	I0729 14:38:52.497595 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.501292 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:52.544320 1038758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:52.544428 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.575452 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.605920 1038758 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 14:38:49.156539 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:49.656397 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.656968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.156321 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.656183 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.157099 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.656725 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.157009 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.656787 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.796331 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:53.799083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.607410 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:52.610017 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610296 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:52.610330 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610553 1038758 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:52.614659 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:52.626967 1038758 kubeadm.go:883] updating cluster {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:52.627087 1038758 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:38:52.627124 1038758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:52.662824 1038758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 14:38:52.662852 1038758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:52.662901 1038758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.662968 1038758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.663040 1038758 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 14:38:52.663043 1038758 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.663066 1038758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.663017 1038758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.664360 1038758 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 14:38:52.664947 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.664965 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.664954 1038758 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.665015 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.665023 1038758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.665351 1038758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.665423 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.829143 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.829158 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.829541 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.851797 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.866728 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 14:38:52.884604 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.893636 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.946087 1038758 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 14:38:52.946134 1038758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 14:38:52.946160 1038758 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.946170 1038758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.946173 1038758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 14:38:52.946192 1038758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.946216 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946221 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946217 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.954361 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.001715 1038758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 14:38:53.001766 1038758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.001826 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106651 1038758 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 14:38:53.106713 1038758 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.106770 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106838 1038758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 14:38:53.106883 1038758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.106921 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106927 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:53.106962 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:53.107012 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:53.107038 1038758 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 14:38:53.107067 1038758 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.107079 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.107092 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.131562 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.212076 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.212199 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.212272 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.214338 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.214430 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.216771 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.216941 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 14:38:53.217037 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:53.220214 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.220306 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.272021 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 14:38:53.272140 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:53.275939 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 14:38:53.275988 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276008 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.276009 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276029 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:38:53.276054 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.301528 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 14:38:53.301578 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 14:38:53.301600 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 14:38:53.301647 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 14:38:53.301759 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:38:55.357295 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.08120738s)
	I0729 14:38:55.357329 1038758 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.081270007s)
	I0729 14:38:55.357371 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 14:38:55.357338 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 14:38:55.357384 1038758 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.055605102s)
	I0729 14:38:55.357406 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 14:38:55.357407 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:55.357464 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:54.708330 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.207468 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:54.156921 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:54.656957 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.156201 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.656783 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.156180 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.656984 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.156610 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.656127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.156785 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.656192 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.295143 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:58.795511 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.217512 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.860011805s)
	I0729 14:38:57.217539 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 14:38:57.217570 1038758 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:57.217634 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:59.187398 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969733063s)
	I0729 14:38:59.187443 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 14:38:59.187482 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:59.187562 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:39:01.138568 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.950970137s)
	I0729 14:39:01.138617 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 14:39:01.138654 1038758 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:39:01.138740 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:59.207657 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:01.208795 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:59.156740 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:59.656223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.156726 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.656593 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.156115 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.656364 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.157069 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.656491 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.156938 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.656898 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.293858 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:03.484613 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.793953 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.231830 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.093043665s)
	I0729 14:39:04.231866 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 14:39:04.231897 1038758 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:04.231963 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:05.182458 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 14:39:05.182512 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:05.182566 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:03.209198 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.707557 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.157177 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:04.656505 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.156530 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.656389 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.156606 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.657121 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.157048 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.656497 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.156327 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.656868 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.794522 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.794886 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:07.253615 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.070972791s)
	I0729 14:39:07.253665 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 14:39:07.253700 1038758 cache_images.go:123] Successfully loaded all cached images
	I0729 14:39:07.253707 1038758 cache_images.go:92] duration metric: took 14.590842072s to LoadCachedImages
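	Each cached image in the phase above is transferred to the guest and imported with podman (CRI-O shares the same image store, which is why `crictl images` can see the result). A minimal sketch of what one iteration does, using plain ssh/scp in place of minikube's internal runner; the guest IP, username, and image name are the ones from this log, and the key path is the per-machine id_rsa under this run's MINIKUBE_HOME:
	  IMG=kube-scheduler_v1.31.0-beta.0
	  KEY=~/.minikube/machines/no-preload-603534/id_rsa
	  # the log's stat -c "%s %y" check decides whether the copy can be skipped
	  scp -i $KEY ~/.minikube/cache/images/amd64/registry.k8s.io/$IMG docker@192.168.61.116:/tmp/$IMG
	  ssh -i $KEY docker@192.168.61.116 "sudo mv /tmp/$IMG /var/lib/minikube/images/ && sudo podman load -i /var/lib/minikube/images/$IMG"
	  ssh -i $KEY docker@192.168.61.116 "sudo crictl images | grep kube-scheduler"   # confirm the runtime sees it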
	I0729 14:39:07.253720 1038758 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0-beta.0 crio true true} ...
	I0729 14:39:07.253899 1038758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-603534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:39:07.253980 1038758 ssh_runner.go:195] Run: crio config
	I0729 14:39:07.309694 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:07.309720 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:07.309731 1038758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:39:07.309754 1038758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603534 NodeName:no-preload-603534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:39:07.309916 1038758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603534"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:39:07.309985 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 14:39:07.321876 1038758 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:39:07.321967 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:39:07.333057 1038758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 14:39:07.350193 1038758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 14:39:07.367171 1038758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
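	The kubeadm.yaml.new just copied over is the kubeadm/kubelet/kube-proxy configuration dumped a few lines above. If you want to sanity-check such a file by hand (not something the minikube flow itself does here), kubeadm can exercise it without initializing the node:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run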
	I0729 14:39:07.384123 1038758 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0729 14:39:07.387896 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:39:07.400317 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:39:07.525822 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:39:07.545142 1038758 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534 for IP: 192.168.61.116
	I0729 14:39:07.545167 1038758 certs.go:194] generating shared ca certs ...
	I0729 14:39:07.545189 1038758 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:39:07.545389 1038758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:39:07.545448 1038758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:39:07.545463 1038758 certs.go:256] generating profile certs ...
	I0729 14:39:07.545582 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/client.key
	I0729 14:39:07.545658 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key.117a155a
	I0729 14:39:07.545725 1038758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key
	I0729 14:39:07.545881 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:39:07.545913 1038758 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:39:07.545922 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:39:07.545945 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:39:07.545969 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:39:07.545990 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:39:07.546027 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:39:07.546679 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:39:07.582488 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:39:07.617327 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:39:07.647627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:39:07.685799 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:39:07.720365 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:39:07.744627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:39:07.771409 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:39:07.797570 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:39:07.820888 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:39:07.843714 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:39:07.867365 1038758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:39:07.884283 1038758 ssh_runner.go:195] Run: openssl version
	I0729 14:39:07.890379 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:39:07.901894 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906431 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906487 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.912284 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:39:07.923393 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:39:07.934119 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938563 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938620 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.944115 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:39:07.954815 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:39:07.965864 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970695 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970761 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.977340 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:39:07.990416 1038758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:39:07.995446 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:39:08.001615 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:39:08.007621 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:39:08.013648 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:39:08.019525 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:39:08.025505 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
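Each `-checkend 86400` run exits non-zero when the certificate expires within the next 24 hours, which is what tells minikube whether the existing certs can be reused. The same check in Go with crypto/x509 (a sketch, assuming one certificate per PEM file):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}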
	I0729 14:39:08.031480 1038758 kubeadm.go:392] StartCluster: {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:39:08.031592 1038758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:39:08.031657 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.077847 1038758 cri.go:89] found id: ""
	I0729 14:39:08.077936 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:39:08.088616 1038758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:39:08.088639 1038758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:39:08.088704 1038758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:39:08.101150 1038758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:39:08.102305 1038758 kubeconfig.go:125] found "no-preload-603534" server: "https://192.168.61.116:8443"
	I0729 14:39:08.105529 1038758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:39:08.117031 1038758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0729 14:39:08.117070 1038758 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:39:08.117085 1038758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:39:08.117148 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.171626 1038758 cri.go:89] found id: ""
	I0729 14:39:08.171706 1038758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:39:08.190491 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:39:08.200776 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:39:08.200806 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:39:08.200873 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:39:08.211430 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:39:08.211483 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:39:08.221865 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:39:08.231668 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:39:08.231719 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:39:08.242027 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.251585 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:39:08.251639 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.261521 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:39:08.271210 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:39:08.271284 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:39:08.281112 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
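The sequence above greps each of the four kubeconfigs for the expected control-plane endpoint, removes any that lack it (here they are simply missing), and then stages the regenerated kubeadm.yaml. A hedged Go sketch of that check-and-remove loop, illustrative rather than minikube's actual implementation:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, so the later "kubeadm init phase
// kubeconfig" run can recreate it from scratch.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(p) // ignore "not found": the goal is just absence
			fmt.Printf("removed (or already absent): %s\n", p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}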
	I0729 14:39:08.290948 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:08.417397 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.400064 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.590859 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.670134 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.781580 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:39:09.781719 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.282592 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.781923 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.843114 1038758 api_server.go:72] duration metric: took 1.061535691s to wait for apiserver process to appear ...
	I0729 14:39:10.843151 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:39:10.843182 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:10.843715 1038758 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0729 14:39:11.343301 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:08.207563 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:10.208276 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.156858 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:09.656910 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.156126 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.657149 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.156223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.657184 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.156454 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.656896 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.656971 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.993249 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:13.993278 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:13.993298 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.011972 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:14.012012 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:14.343228 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.347946 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.347983 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:14.844144 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.858278 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.858311 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:15.343885 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:15.350223 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:39:15.360468 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:39:15.360513 1038758 api_server.go:131] duration metric: took 4.517353977s to wait for apiserver health ...
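The health wait above tolerates the early 403s (the anonymous user hitting /healthz before RBAC bootstrap roles exist) and the 500s (rbac/bootstrap-roles and scheduling post-start hooks still running), and simply retries until a 200. A minimal polling sketch, assuming the apiserver serves a self-signed cert so the probe skips TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes; 403 and 500 responses are treated as "not ready".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// health probe only: the serving cert is the cluster's own CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.116:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}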
	I0729 14:39:15.360524 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:15.360532 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:15.362679 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:39:12.293516 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.294107 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.364237 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:39:15.379974 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
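The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration mentioned two lines up. Its exact contents are not in the log; the sketch below writes a representative bridge plus host-local conflist of the same general shape (subnet, bridge name, and plugin list here are assumptions for illustration only):

package main

import "os"

// A representative bridge CNI conflist; the field values (subnet, bridge
// name, portmap plugin) are illustrative, not the exact bytes minikube wrote.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}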
	I0729 14:39:15.422444 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:39:15.441468 1038758 system_pods.go:59] 8 kube-system pods found
	I0729 14:39:15.441512 1038758 system_pods.go:61] "coredns-5cfdc65f69-tjdx4" [986cdef3-de61-4c0f-bc75-fae4f6b44a37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:39:15.441525 1038758 system_pods.go:61] "etcd-no-preload-603534" [e27f5761-5322-4d88-b90a-bcff42c9dfa5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:39:15.441537 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [33ed9f7c-1240-40cf-b51d-125b3473bfd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:39:15.441547 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [f79520a2-380e-4d8a-b1ff-78c6cd3d3741] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:39:15.441559 1038758 system_pods.go:61] "kube-proxy-ftpk5" [a5471ad7-5fd3-49b7-8631-4ca2962761d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:39:15.441568 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [860e262c-f036-4181-a0ad-8ba0058a47d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:39:15.441580 1038758 system_pods.go:61] "metrics-server-78fcd8795b-59sbc" [8af92987-ce8d-434f-93de-16d0adc35fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:39:15.441598 1038758 system_pods.go:61] "storage-provisioner" [579d0cc8-e30e-4ee3-ac55-c2f0bc5871e1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:39:15.441606 1038758 system_pods.go:74] duration metric: took 19.133029ms to wait for pod list to return data ...
	I0729 14:39:15.441618 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:39:15.445594 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:39:15.445630 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:39:15.445646 1038758 node_conditions.go:105] duration metric: took 4.019018ms to run NodePressure ...
	I0729 14:39:15.445678 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:15.743404 1038758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751028 1038758 kubeadm.go:739] kubelet initialised
	I0729 14:39:15.751050 1038758 kubeadm.go:740] duration metric: took 7.619795ms waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751059 1038758 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:39:15.759157 1038758 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:12.708704 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.208434 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:14.656806 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.156564 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.656881 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.156239 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.656440 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.157130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.656240 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.156161 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.656808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.294741 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:18.797700 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.768132 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.265670 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.709929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.206710 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.207809 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:19.156721 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:19.656766 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.156352 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.656788 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.156179 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.656213 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.156475 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.656275 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.156592 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.656979 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.294265 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:23.294366 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:25.794648 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.265947 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.266644 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.708214 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:27.208824 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.156798 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:24.656473 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.156551 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.656356 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.156887 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.656332 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.156494 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.656839 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.156763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.656512 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.795415 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:30.293460 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:26.766260 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.265817 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.265851 1038758 pod_ready.go:81] duration metric: took 13.506661461s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.265865 1038758 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276021 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.276043 1038758 pod_ready.go:81] duration metric: took 10.172055ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276052 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280197 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.280215 1038758 pod_ready.go:81] duration metric: took 4.156785ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280223 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284076 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.284096 1038758 pod_ready.go:81] duration metric: took 3.865927ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284122 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288280 1038758 pod_ready.go:92] pod "kube-proxy-ftpk5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.288297 1038758 pod_ready.go:81] duration metric: took 4.16843ms for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288305 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666771 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.666802 1038758 pod_ready.go:81] duration metric: took 378.49001ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666813 1038758 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.706596 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:32.208095 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.156096 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:29.656289 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.156756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.656888 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.156563 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.156271 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.656562 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.293988 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.793456 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:31.674203 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.174002 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.708005 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:37.206789 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.157046 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:34.656398 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.156198 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.656763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.156542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.656994 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.156808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.657093 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.156119 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.657017 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.793771 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.294267 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:36.676693 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.172713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.174348 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.207584 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.707645 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:39.656176 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.156455 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.656609 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.156891 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.656327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:41.656423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:41.701839 1039759 cri.go:89] found id: ""
	I0729 14:39:41.701863 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.701872 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:41.701878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:41.701942 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:41.738281 1039759 cri.go:89] found id: ""
	I0729 14:39:41.738308 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.738315 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:41.738321 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:41.738377 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:41.771954 1039759 cri.go:89] found id: ""
	I0729 14:39:41.771981 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.771990 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:41.771996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:41.772060 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:41.806157 1039759 cri.go:89] found id: ""
	I0729 14:39:41.806182 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.806190 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:41.806196 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:41.806249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:41.841284 1039759 cri.go:89] found id: ""
	I0729 14:39:41.841312 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.841319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:41.841325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:41.841394 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:41.875864 1039759 cri.go:89] found id: ""
	I0729 14:39:41.875893 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.875902 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:41.875908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:41.875962 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:41.909797 1039759 cri.go:89] found id: ""
	I0729 14:39:41.909824 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.909833 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:41.909840 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:41.909892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:41.943886 1039759 cri.go:89] found id: ""
	I0729 14:39:41.943912 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.943920 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:41.943929 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:41.943944 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:41.983224 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:41.983254 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:42.035264 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:42.035303 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:42.049343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:42.049369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:42.171904 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:42.171924 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:42.171947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:41.295209 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.795811 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.673853 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:45.674302 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.207555 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:46.707384 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.738629 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:44.753497 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:44.753582 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:44.793256 1039759 cri.go:89] found id: ""
	I0729 14:39:44.793283 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.793291 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:44.793298 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:44.793363 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:44.833698 1039759 cri.go:89] found id: ""
	I0729 14:39:44.833726 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.833733 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:44.833739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:44.833792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:44.876328 1039759 cri.go:89] found id: ""
	I0729 14:39:44.876357 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.876366 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:44.876372 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:44.876471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:44.918091 1039759 cri.go:89] found id: ""
	I0729 14:39:44.918121 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.918132 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:44.918140 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:44.918210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:44.965105 1039759 cri.go:89] found id: ""
	I0729 14:39:44.965137 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.965149 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:44.965157 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:44.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:45.014119 1039759 cri.go:89] found id: ""
	I0729 14:39:45.014150 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.014162 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:45.014170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:45.014243 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:45.059826 1039759 cri.go:89] found id: ""
	I0729 14:39:45.059858 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.059870 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:45.059879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:45.059946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:45.099666 1039759 cri.go:89] found id: ""
	I0729 14:39:45.099706 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.099717 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:45.099730 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:45.099748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:45.144219 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:45.144263 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:45.199719 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:45.199754 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:45.214225 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:45.214260 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:45.289090 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:45.289119 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:45.289138 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:47.860797 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:47.874523 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:47.874606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:47.913570 1039759 cri.go:89] found id: ""
	I0729 14:39:47.913599 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.913608 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:47.913615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:47.913674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:47.946699 1039759 cri.go:89] found id: ""
	I0729 14:39:47.946725 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.946734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:47.946740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:47.946792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:47.986492 1039759 cri.go:89] found id: ""
	I0729 14:39:47.986533 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.986546 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:47.986554 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:47.986635 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:48.027232 1039759 cri.go:89] found id: ""
	I0729 14:39:48.027260 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.027268 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:48.027274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:48.027327 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:48.065119 1039759 cri.go:89] found id: ""
	I0729 14:39:48.065145 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.065153 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:48.065159 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:48.065217 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:48.105087 1039759 cri.go:89] found id: ""
	I0729 14:39:48.105119 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.105128 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:48.105134 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:48.105199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:48.144684 1039759 cri.go:89] found id: ""
	I0729 14:39:48.144718 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.144730 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:48.144737 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:48.144816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:48.180350 1039759 cri.go:89] found id: ""
	I0729 14:39:48.180380 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.180389 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:48.180401 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:48.180436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:48.259859 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:48.259905 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:48.301132 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:48.301163 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:48.352753 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:48.352795 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:48.365936 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:48.365969 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:48.434634 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:46.293123 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.293674 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.294113 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:47.674411 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.173727 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.707887 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:51.207444 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.934903 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:50.948702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:50.948787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:50.982889 1039759 cri.go:89] found id: ""
	I0729 14:39:50.982917 1039759 logs.go:276] 0 containers: []
	W0729 14:39:50.982927 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:50.982933 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:50.983010 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:51.020679 1039759 cri.go:89] found id: ""
	I0729 14:39:51.020713 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.020726 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:51.020734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:51.020818 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:51.055114 1039759 cri.go:89] found id: ""
	I0729 14:39:51.055147 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.055158 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:51.055166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:51.055237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:51.089053 1039759 cri.go:89] found id: ""
	I0729 14:39:51.089087 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.089099 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:51.089108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:51.089184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:51.125823 1039759 cri.go:89] found id: ""
	I0729 14:39:51.125851 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.125861 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:51.125868 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:51.125938 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:51.162645 1039759 cri.go:89] found id: ""
	I0729 14:39:51.162683 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.162694 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:51.162702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:51.162767 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:51.196820 1039759 cri.go:89] found id: ""
	I0729 14:39:51.196849 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.196857 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:51.196864 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:51.196937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:51.236442 1039759 cri.go:89] found id: ""
	I0729 14:39:51.236469 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.236479 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:51.236491 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:51.236506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:51.317077 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:51.317101 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:51.317119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:51.398118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:51.398172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:51.437096 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:51.437128 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:51.488949 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:51.488992 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:52.795544 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.294184 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:52.174241 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.672702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:53.207592 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.706971 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.004536 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:54.019400 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:54.019480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:54.054592 1039759 cri.go:89] found id: ""
	I0729 14:39:54.054626 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.054639 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:54.054647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:54.054712 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:54.090184 1039759 cri.go:89] found id: ""
	I0729 14:39:54.090217 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.090227 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:54.090234 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:54.090304 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:54.129977 1039759 cri.go:89] found id: ""
	I0729 14:39:54.130007 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.130016 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:54.130022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:54.130081 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:54.170940 1039759 cri.go:89] found id: ""
	I0729 14:39:54.170970 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.170980 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:54.170988 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:54.171053 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:54.206197 1039759 cri.go:89] found id: ""
	I0729 14:39:54.206224 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.206244 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:54.206251 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:54.206340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:54.246929 1039759 cri.go:89] found id: ""
	I0729 14:39:54.246963 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.246973 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:54.246980 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:54.247049 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:54.286202 1039759 cri.go:89] found id: ""
	I0729 14:39:54.286231 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.286240 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:54.286245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:54.286315 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:54.321784 1039759 cri.go:89] found id: ""
	I0729 14:39:54.321815 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.321824 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:54.321837 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:54.321860 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:54.363159 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:54.363187 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:54.416151 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:54.416194 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:54.429824 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:54.429852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:54.506348 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:54.506373 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:54.506390 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.094810 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:57.108163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:57.108238 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:57.143556 1039759 cri.go:89] found id: ""
	I0729 14:39:57.143588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.143601 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:57.143608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:57.143678 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:57.177664 1039759 cri.go:89] found id: ""
	I0729 14:39:57.177695 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.177706 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:57.177714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:57.177801 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:57.212046 1039759 cri.go:89] found id: ""
	I0729 14:39:57.212106 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.212231 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:57.212249 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:57.212323 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:57.252518 1039759 cri.go:89] found id: ""
	I0729 14:39:57.252549 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.252558 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:57.252564 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:57.252677 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:57.287045 1039759 cri.go:89] found id: ""
	I0729 14:39:57.287069 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.287077 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:57.287084 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:57.287141 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:57.324553 1039759 cri.go:89] found id: ""
	I0729 14:39:57.324588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.324599 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:57.324607 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:57.324684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:57.358761 1039759 cri.go:89] found id: ""
	I0729 14:39:57.358801 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.358811 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:57.358819 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:57.358898 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:57.402023 1039759 cri.go:89] found id: ""
	I0729 14:39:57.402051 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.402062 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:57.402074 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:57.402094 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:57.445600 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:57.445632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:57.501876 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:57.501911 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:57.518264 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:57.518299 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:57.593247 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:57.593274 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:57.593292 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.793782 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.794287 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:56.673243 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.174416 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:57.707618 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.208574 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.181109 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:00.194553 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:00.194641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:00.237761 1039759 cri.go:89] found id: ""
	I0729 14:40:00.237801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.237814 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:00.237829 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:00.237901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:00.273113 1039759 cri.go:89] found id: ""
	I0729 14:40:00.273145 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.273157 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:00.273166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:00.273232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:00.312136 1039759 cri.go:89] found id: ""
	I0729 14:40:00.312169 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.312176 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:00.312182 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:00.312249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:00.349610 1039759 cri.go:89] found id: ""
	I0729 14:40:00.349642 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.349654 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:00.349662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:00.349792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:00.384121 1039759 cri.go:89] found id: ""
	I0729 14:40:00.384148 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.384157 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:00.384163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:00.384233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:00.419684 1039759 cri.go:89] found id: ""
	I0729 14:40:00.419720 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.419731 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:00.419739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:00.419809 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:00.453905 1039759 cri.go:89] found id: ""
	I0729 14:40:00.453937 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.453945 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:00.453951 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:00.454023 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:00.490124 1039759 cri.go:89] found id: ""
	I0729 14:40:00.490149 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.490158 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:00.490168 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:00.490185 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:00.562684 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:00.562713 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:00.562735 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:00.643860 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:00.643899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:00.683247 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:00.683276 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:00.734131 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:00.734166 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.249468 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:03.262712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:03.262788 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:03.300774 1039759 cri.go:89] found id: ""
	I0729 14:40:03.300801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.300816 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:03.300823 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:03.300891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:03.335367 1039759 cri.go:89] found id: ""
	I0729 14:40:03.335398 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.335409 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:03.335419 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:03.335488 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:03.375683 1039759 cri.go:89] found id: ""
	I0729 14:40:03.375717 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.375728 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:03.375734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:03.375814 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:03.409593 1039759 cri.go:89] found id: ""
	I0729 14:40:03.409623 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.409631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:03.409637 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:03.409711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:03.444531 1039759 cri.go:89] found id: ""
	I0729 14:40:03.444566 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.444578 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:03.444585 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:03.444655 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:03.479446 1039759 cri.go:89] found id: ""
	I0729 14:40:03.479476 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.479487 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:03.479495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:03.479563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:03.517277 1039759 cri.go:89] found id: ""
	I0729 14:40:03.517311 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.517321 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:03.517329 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:03.517396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:03.556343 1039759 cri.go:89] found id: ""
	I0729 14:40:03.556373 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.556381 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:03.556391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:03.556422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:03.610156 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:03.610196 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.624776 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:03.624812 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:03.696584 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:03.696609 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:03.696625 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:03.775066 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:03.775109 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:01.794683 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:03.795112 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:01.673980 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.173900 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:02.706731 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.707655 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:07.207027 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.319720 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:06.332865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:06.332937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:06.366576 1039759 cri.go:89] found id: ""
	I0729 14:40:06.366608 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.366631 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:06.366639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:06.366730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:06.402710 1039759 cri.go:89] found id: ""
	I0729 14:40:06.402735 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.402743 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:06.402748 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:06.402804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:06.439048 1039759 cri.go:89] found id: ""
	I0729 14:40:06.439095 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.439116 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:06.439125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:06.439196 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:06.473407 1039759 cri.go:89] found id: ""
	I0729 14:40:06.473443 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.473456 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:06.473464 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:06.473544 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:06.507278 1039759 cri.go:89] found id: ""
	I0729 14:40:06.507309 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.507319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:06.507327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:06.507396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:06.541573 1039759 cri.go:89] found id: ""
	I0729 14:40:06.541600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.541608 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:06.541617 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:06.541679 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:06.587666 1039759 cri.go:89] found id: ""
	I0729 14:40:06.587697 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.587707 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:06.587714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:06.587773 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:06.622415 1039759 cri.go:89] found id: ""
	I0729 14:40:06.622448 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.622459 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:06.622478 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:06.622497 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:06.659987 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:06.660019 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:06.716303 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:06.716338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:06.731051 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:06.731076 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:06.809014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:06.809045 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:06.809064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:06.293552 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:08.294453 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:10.295216 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.674445 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.174349 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.207784 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.208318 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.387843 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:09.401894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:09.401984 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:09.439385 1039759 cri.go:89] found id: ""
	I0729 14:40:09.439425 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.439438 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:09.439446 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:09.439506 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:09.474307 1039759 cri.go:89] found id: ""
	I0729 14:40:09.474340 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.474352 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:09.474361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:09.474434 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:09.508198 1039759 cri.go:89] found id: ""
	I0729 14:40:09.508233 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.508245 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:09.508253 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:09.508325 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:09.543729 1039759 cri.go:89] found id: ""
	I0729 14:40:09.543762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.543772 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:09.543779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:09.543847 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:09.598723 1039759 cri.go:89] found id: ""
	I0729 14:40:09.598760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.598769 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:09.598775 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:09.598841 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:09.636009 1039759 cri.go:89] found id: ""
	I0729 14:40:09.636038 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.636050 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:09.636058 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:09.636126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:09.675590 1039759 cri.go:89] found id: ""
	I0729 14:40:09.675618 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.675628 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:09.675636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:09.675698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:09.710331 1039759 cri.go:89] found id: ""
	I0729 14:40:09.710374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.710385 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:09.710397 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:09.710416 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:09.790014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:09.790046 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:09.790064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:09.870233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:09.870278 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:09.910421 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:09.910454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:09.962429 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:09.962474 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.476775 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:12.490804 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:12.490875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:12.529435 1039759 cri.go:89] found id: ""
	I0729 14:40:12.529466 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.529478 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:12.529485 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:12.529551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:12.564769 1039759 cri.go:89] found id: ""
	I0729 14:40:12.564806 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.564818 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:12.564826 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:12.564912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:12.600253 1039759 cri.go:89] found id: ""
	I0729 14:40:12.600285 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.600296 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:12.600304 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:12.600367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:12.636112 1039759 cri.go:89] found id: ""
	I0729 14:40:12.636146 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.636155 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:12.636161 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:12.636216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:12.675592 1039759 cri.go:89] found id: ""
	I0729 14:40:12.675621 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.675631 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:12.675639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:12.675711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:12.711438 1039759 cri.go:89] found id: ""
	I0729 14:40:12.711469 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.711480 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:12.711488 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:12.711554 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:12.745497 1039759 cri.go:89] found id: ""
	I0729 14:40:12.745524 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.745533 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:12.745539 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:12.745598 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:12.778934 1039759 cri.go:89] found id: ""
	I0729 14:40:12.778966 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.778977 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:12.778991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:12.779010 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:12.854721 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:12.854759 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:12.854780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:12.932118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:12.932158 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:12.974429 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:12.974461 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:13.030073 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:13.030108 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.795056 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.295125 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.674169 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:14.173503 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:16.175691 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:13.707268 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.708540 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.544245 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:15.559013 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:15.559090 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:15.594018 1039759 cri.go:89] found id: ""
	I0729 14:40:15.594051 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.594064 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:15.594076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:15.594147 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:15.630734 1039759 cri.go:89] found id: ""
	I0729 14:40:15.630762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.630771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:15.630777 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:15.630832 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:15.666159 1039759 cri.go:89] found id: ""
	I0729 14:40:15.666191 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.666202 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:15.666210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:15.666275 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:15.701058 1039759 cri.go:89] found id: ""
	I0729 14:40:15.701088 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.701098 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:15.701115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:15.701170 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:15.737006 1039759 cri.go:89] found id: ""
	I0729 14:40:15.737040 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.737052 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:15.737066 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:15.737139 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:15.775678 1039759 cri.go:89] found id: ""
	I0729 14:40:15.775706 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.775718 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:15.775728 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:15.775795 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:15.812239 1039759 cri.go:89] found id: ""
	I0729 14:40:15.812268 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.812276 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:15.812283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:15.812348 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:15.847653 1039759 cri.go:89] found id: ""
	I0729 14:40:15.847682 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.847693 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:15.847707 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:15.847725 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:15.903094 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:15.903137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:15.917060 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:15.917093 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:15.993458 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:15.993481 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:15.993499 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:16.073369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:16.073409 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:18.616107 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:18.630263 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:18.630340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:18.668228 1039759 cri.go:89] found id: ""
	I0729 14:40:18.668261 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.668271 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:18.668279 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:18.668342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:18.706863 1039759 cri.go:89] found id: ""
	I0729 14:40:18.706891 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.706902 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:18.706909 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:18.706978 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:18.739703 1039759 cri.go:89] found id: ""
	I0729 14:40:18.739728 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.739736 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:18.739742 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:18.739796 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:18.777025 1039759 cri.go:89] found id: ""
	I0729 14:40:18.777066 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.777077 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:18.777085 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:18.777158 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:18.814000 1039759 cri.go:89] found id: ""
	I0729 14:40:18.814026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.814039 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:18.814051 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:18.814116 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:18.851027 1039759 cri.go:89] found id: ""
	I0729 14:40:18.851058 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.851069 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:18.851076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:18.851151 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:17.796245 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.293964 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.673560 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:21.173099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.207376 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.707629 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.903888 1039759 cri.go:89] found id: ""
	I0729 14:40:18.903920 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.903932 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:18.903941 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:18.904002 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:18.938756 1039759 cri.go:89] found id: ""
	I0729 14:40:18.938784 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.938791 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:18.938801 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:18.938814 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:18.988482 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:18.988520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:19.002145 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:19.002177 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:19.085372 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:19.085397 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:19.085424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:19.171294 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:19.171387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:21.709578 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:21.722874 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:21.722941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:21.768110 1039759 cri.go:89] found id: ""
	I0729 14:40:21.768139 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.768150 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:21.768156 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:21.768210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:21.808853 1039759 cri.go:89] found id: ""
	I0729 14:40:21.808886 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.808897 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:21.808905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:21.808974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:21.843432 1039759 cri.go:89] found id: ""
	I0729 14:40:21.843472 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.843484 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:21.843493 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:21.843576 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:21.876497 1039759 cri.go:89] found id: ""
	I0729 14:40:21.876535 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.876547 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:21.876555 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:21.876633 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:21.911528 1039759 cri.go:89] found id: ""
	I0729 14:40:21.911556 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.911565 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:21.911571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:21.911626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:21.944514 1039759 cri.go:89] found id: ""
	I0729 14:40:21.944548 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.944560 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:21.944569 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:21.944641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:21.978113 1039759 cri.go:89] found id: ""
	I0729 14:40:21.978151 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.978162 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:21.978170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:21.978233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:22.012390 1039759 cri.go:89] found id: ""
	I0729 14:40:22.012438 1039759 logs.go:276] 0 containers: []
	W0729 14:40:22.012449 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:22.012461 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:22.012484 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:22.027921 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:22.027952 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:22.095087 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:22.095115 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:22.095132 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:22.178462 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:22.178495 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:22.220155 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:22.220188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:22.794431 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.295391 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:23.174050 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.673437 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:22.708012 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.207491 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:24.771932 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:24.784764 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:24.784851 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:24.820445 1039759 cri.go:89] found id: ""
	I0729 14:40:24.820473 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.820485 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:24.820501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:24.820569 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:24.854753 1039759 cri.go:89] found id: ""
	I0729 14:40:24.854786 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.854796 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:24.854802 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:24.854856 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:24.889200 1039759 cri.go:89] found id: ""
	I0729 14:40:24.889230 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.889242 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:24.889250 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:24.889312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:24.932383 1039759 cri.go:89] found id: ""
	I0729 14:40:24.932435 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.932447 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:24.932454 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:24.932515 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:24.971830 1039759 cri.go:89] found id: ""
	I0729 14:40:24.971859 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.971871 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:24.971879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:24.971936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:25.014336 1039759 cri.go:89] found id: ""
	I0729 14:40:25.014374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.014386 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:25.014397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:25.014464 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:25.048131 1039759 cri.go:89] found id: ""
	I0729 14:40:25.048161 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.048171 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:25.048177 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:25.048232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:25.089830 1039759 cri.go:89] found id: ""
	I0729 14:40:25.089866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.089878 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:25.089893 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:25.089907 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:25.172078 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:25.172113 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:25.221629 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:25.221661 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:25.291761 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:25.291806 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:25.314521 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:25.314559 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:25.402738 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:27.903335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:27.918335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:27.918413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:27.951929 1039759 cri.go:89] found id: ""
	I0729 14:40:27.951955 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.951966 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:27.951972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:27.952029 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:27.986229 1039759 cri.go:89] found id: ""
	I0729 14:40:27.986266 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.986279 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:27.986287 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:27.986352 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:28.019467 1039759 cri.go:89] found id: ""
	I0729 14:40:28.019504 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.019517 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:28.019524 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:28.019590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:28.053762 1039759 cri.go:89] found id: ""
	I0729 14:40:28.053790 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.053799 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:28.053806 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:28.053858 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:28.088947 1039759 cri.go:89] found id: ""
	I0729 14:40:28.088975 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.088989 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:28.088996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:28.089070 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:28.130018 1039759 cri.go:89] found id: ""
	I0729 14:40:28.130052 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.130064 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:28.130072 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:28.130143 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:28.163682 1039759 cri.go:89] found id: ""
	I0729 14:40:28.163715 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.163725 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:28.163734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:28.163802 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:28.199698 1039759 cri.go:89] found id: ""
	I0729 14:40:28.199732 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.199744 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:28.199757 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:28.199774 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:28.253735 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:28.253776 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:28.267786 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:28.267825 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:28.337218 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:28.337250 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:28.337265 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:28.419644 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:28.419688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:27.793963 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.293775 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:28.172846 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.173544 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:27.707884 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:29.708174 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.958707 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:30.972073 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:30.972146 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:31.016629 1039759 cri.go:89] found id: ""
	I0729 14:40:31.016662 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.016673 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:31.016681 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:31.016747 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:31.058891 1039759 cri.go:89] found id: ""
	I0729 14:40:31.058921 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.058930 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:31.058936 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:31.059004 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:31.096599 1039759 cri.go:89] found id: ""
	I0729 14:40:31.096633 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.096645 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:31.096654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:31.096741 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:31.143525 1039759 cri.go:89] found id: ""
	I0729 14:40:31.143554 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.143562 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:31.143568 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:31.143628 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:31.180188 1039759 cri.go:89] found id: ""
	I0729 14:40:31.180220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.180230 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:31.180239 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:31.180310 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:31.219995 1039759 cri.go:89] found id: ""
	I0729 14:40:31.220026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.220037 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:31.220045 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:31.220108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:31.254137 1039759 cri.go:89] found id: ""
	I0729 14:40:31.254182 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.254192 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:31.254200 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:31.254272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:31.288065 1039759 cri.go:89] found id: ""
	I0729 14:40:31.288098 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.288109 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:31.288122 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:31.288137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:31.341299 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:31.341338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:31.355357 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:31.355387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:31.427414 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:31.427439 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:31.427453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:31.508372 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:31.508439 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:32.294256 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.295131 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.174315 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.674462 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.208183 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:36.707763 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.052770 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:34.066300 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:34.066366 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:34.104242 1039759 cri.go:89] found id: ""
	I0729 14:40:34.104278 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.104290 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:34.104299 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:34.104367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:34.143092 1039759 cri.go:89] found id: ""
	I0729 14:40:34.143125 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.143137 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:34.143145 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:34.143216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:34.177966 1039759 cri.go:89] found id: ""
	I0729 14:40:34.177993 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.178002 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:34.178011 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:34.178098 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:34.218325 1039759 cri.go:89] found id: ""
	I0729 14:40:34.218351 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.218361 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:34.218369 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:34.218437 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:34.256632 1039759 cri.go:89] found id: ""
	I0729 14:40:34.256665 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.256675 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:34.256683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:34.256753 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:34.290713 1039759 cri.go:89] found id: ""
	I0729 14:40:34.290739 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.290747 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:34.290753 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:34.290816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:34.331345 1039759 cri.go:89] found id: ""
	I0729 14:40:34.331378 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.331389 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:34.331397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:34.331468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:34.370184 1039759 cri.go:89] found id: ""
	I0729 14:40:34.370214 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.370226 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:34.370239 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:34.370256 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:34.448667 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:34.448709 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:34.492943 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:34.492974 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:34.548784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:34.548827 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:34.565353 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:34.565389 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:34.639411 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.140039 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:37.153732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:37.153806 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:37.189360 1039759 cri.go:89] found id: ""
	I0729 14:40:37.189389 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.189398 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:37.189404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:37.189474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:37.225790 1039759 cri.go:89] found id: ""
	I0729 14:40:37.225820 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.225831 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:37.225839 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:37.225914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:37.261742 1039759 cri.go:89] found id: ""
	I0729 14:40:37.261772 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.261782 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:37.261791 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:37.261862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:37.295791 1039759 cri.go:89] found id: ""
	I0729 14:40:37.295826 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.295835 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:37.295843 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:37.295908 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:37.331290 1039759 cri.go:89] found id: ""
	I0729 14:40:37.331324 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.331334 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:37.331343 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:37.331413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:37.366150 1039759 cri.go:89] found id: ""
	I0729 14:40:37.366183 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.366195 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:37.366203 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:37.366273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:37.400983 1039759 cri.go:89] found id: ""
	I0729 14:40:37.401019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.401030 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:37.401038 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:37.401110 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:37.435333 1039759 cri.go:89] found id: ""
	I0729 14:40:37.435368 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.435379 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:37.435391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:37.435407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:37.488020 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:37.488057 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:37.501543 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:37.501573 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:37.576006 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.576033 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:37.576050 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:37.658600 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:37.658641 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:36.794615 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:38.795414 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:37.175174 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.674361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.207946 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:41.707724 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:40.200763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:40.216048 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:40.216121 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:40.253969 1039759 cri.go:89] found id: ""
	I0729 14:40:40.253996 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.254005 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:40.254012 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:40.254078 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:40.289557 1039759 cri.go:89] found id: ""
	I0729 14:40:40.289595 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.289608 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:40.289616 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:40.289698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:40.329756 1039759 cri.go:89] found id: ""
	I0729 14:40:40.329799 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.329823 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:40.329833 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:40.329906 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:40.365281 1039759 cri.go:89] found id: ""
	I0729 14:40:40.365315 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.365327 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:40.365335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:40.365403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:40.401300 1039759 cri.go:89] found id: ""
	I0729 14:40:40.401327 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.401336 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:40.401342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:40.401398 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:40.435679 1039759 cri.go:89] found id: ""
	I0729 14:40:40.435710 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.435719 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:40.435726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:40.435781 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:40.475825 1039759 cri.go:89] found id: ""
	I0729 14:40:40.475851 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.475859 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:40.475866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:40.475926 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:40.512153 1039759 cri.go:89] found id: ""
	I0729 14:40:40.512184 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.512191 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:40.512202 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:40.512215 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:40.563983 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:40.564022 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:40.578823 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:40.578853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:40.650282 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:40.650311 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:40.650328 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:40.734933 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:40.734980 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.280095 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:43.294284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:43.294361 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:43.328862 1039759 cri.go:89] found id: ""
	I0729 14:40:43.328890 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.328899 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:43.328905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:43.328971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:43.366321 1039759 cri.go:89] found id: ""
	I0729 14:40:43.366364 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.366376 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:43.366384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:43.366459 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:43.400189 1039759 cri.go:89] found id: ""
	I0729 14:40:43.400220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.400229 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:43.400235 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:43.400299 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:43.438521 1039759 cri.go:89] found id: ""
	I0729 14:40:43.438562 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.438582 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:43.438594 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:43.438665 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:43.473931 1039759 cri.go:89] found id: ""
	I0729 14:40:43.473958 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.473966 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:43.473972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:43.474035 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:43.511460 1039759 cri.go:89] found id: ""
	I0729 14:40:43.511490 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.511497 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:43.511506 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:43.511563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:43.547255 1039759 cri.go:89] found id: ""
	I0729 14:40:43.547290 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.547301 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:43.547309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:43.547375 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:43.582384 1039759 cri.go:89] found id: ""
	I0729 14:40:43.582418 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.582428 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:43.582441 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:43.582459 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:43.595747 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:43.595780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:43.665389 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:43.665413 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:43.665427 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:43.752669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:43.752712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.797239 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:43.797272 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:41.294242 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:43.294985 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:45.794449 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:42.173495 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.173830 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.207593 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.706855 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.352841 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:46.368204 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:46.368278 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:46.406661 1039759 cri.go:89] found id: ""
	I0729 14:40:46.406687 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.406695 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:46.406701 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:46.406761 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:46.443728 1039759 cri.go:89] found id: ""
	I0729 14:40:46.443760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.443771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:46.443778 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:46.443845 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:46.477632 1039759 cri.go:89] found id: ""
	I0729 14:40:46.477666 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.477677 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:46.477686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:46.477754 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:46.512510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.512538 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.512549 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:46.512557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:46.512629 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:46.550803 1039759 cri.go:89] found id: ""
	I0729 14:40:46.550834 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.550843 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:46.550848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:46.550914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:46.591610 1039759 cri.go:89] found id: ""
	I0729 14:40:46.591640 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.591652 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:46.591661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:46.591723 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:46.631090 1039759 cri.go:89] found id: ""
	I0729 14:40:46.631122 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.631132 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:46.631139 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:46.631199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:46.670510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.670542 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.670554 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:46.670573 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:46.670590 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:46.725560 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:46.725594 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:46.739348 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:46.739372 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:46.812850 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:46.812874 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:46.812892 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:46.892922 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:46.892964 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:47.795538 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:50.293685 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.674514 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.174577 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:48.708243 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.207168 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.438741 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:49.452505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:49.452588 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:49.487294 1039759 cri.go:89] found id: ""
	I0729 14:40:49.487323 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.487331 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:49.487340 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:49.487407 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:49.521783 1039759 cri.go:89] found id: ""
	I0729 14:40:49.521816 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.521828 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:49.521836 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:49.521901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:49.557039 1039759 cri.go:89] found id: ""
	I0729 14:40:49.557075 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.557086 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:49.557094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:49.557162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:49.590431 1039759 cri.go:89] found id: ""
	I0729 14:40:49.590462 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.590474 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:49.590494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:49.590574 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:49.626230 1039759 cri.go:89] found id: ""
	I0729 14:40:49.626260 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.626268 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:49.626274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:49.626339 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:49.662030 1039759 cri.go:89] found id: ""
	I0729 14:40:49.662060 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.662068 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:49.662075 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:49.662130 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:49.699988 1039759 cri.go:89] found id: ""
	I0729 14:40:49.700019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.700035 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:49.700076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:49.700144 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:49.736830 1039759 cri.go:89] found id: ""
	I0729 14:40:49.736864 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.736873 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:49.736882 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:49.736895 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:49.775670 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:49.775703 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:49.830820 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:49.830853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:49.846374 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:49.846407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:49.917475 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:49.917502 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:49.917520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.499291 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:52.513571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:52.513641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:52.547437 1039759 cri.go:89] found id: ""
	I0729 14:40:52.547474 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.547487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:52.547495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:52.547559 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:52.587664 1039759 cri.go:89] found id: ""
	I0729 14:40:52.587705 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.587718 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:52.587726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:52.587799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:52.630642 1039759 cri.go:89] found id: ""
	I0729 14:40:52.630670 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.630678 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:52.630684 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:52.630740 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:52.665978 1039759 cri.go:89] found id: ""
	I0729 14:40:52.666010 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.666022 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:52.666030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:52.666103 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:52.701111 1039759 cri.go:89] found id: ""
	I0729 14:40:52.701140 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.701148 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:52.701155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:52.701211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:52.744219 1039759 cri.go:89] found id: ""
	I0729 14:40:52.744247 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.744257 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:52.744265 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:52.744329 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:52.781081 1039759 cri.go:89] found id: ""
	I0729 14:40:52.781113 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.781122 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:52.781128 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:52.781198 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:52.817938 1039759 cri.go:89] found id: ""
	I0729 14:40:52.817974 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.817985 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:52.817999 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:52.818016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:52.895387 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:52.895416 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:52.895433 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.976313 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:52.976356 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:53.013814 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:53.013852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:53.065901 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:53.065937 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:52.798083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.293459 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.674103 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:54.174456 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:53.208082 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.707719 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.580590 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:55.595023 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:55.595108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:55.631449 1039759 cri.go:89] found id: ""
	I0729 14:40:55.631479 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.631487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:55.631494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:55.631551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:55.666245 1039759 cri.go:89] found id: ""
	I0729 14:40:55.666274 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.666283 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:55.666296 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:55.666364 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:55.706582 1039759 cri.go:89] found id: ""
	I0729 14:40:55.706611 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.706621 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:55.706629 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:55.706696 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:55.741930 1039759 cri.go:89] found id: ""
	I0729 14:40:55.741962 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.741973 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:55.741990 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:55.742058 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:55.781440 1039759 cri.go:89] found id: ""
	I0729 14:40:55.781475 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.781486 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:55.781494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:55.781599 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:55.825329 1039759 cri.go:89] found id: ""
	I0729 14:40:55.825366 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.825377 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:55.825387 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:55.825466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:55.860834 1039759 cri.go:89] found id: ""
	I0729 14:40:55.860866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.860878 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:55.860886 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:55.860950 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:55.895460 1039759 cri.go:89] found id: ""
	I0729 14:40:55.895492 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.895502 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:55.895514 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:55.895531 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:55.951739 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:55.951781 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:55.965760 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:55.965792 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:56.044422 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:56.044458 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:56.044477 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:56.123669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:56.123714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:58.668279 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:58.682912 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:58.682974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:58.718757 1039759 cri.go:89] found id: ""
	I0729 14:40:58.718787 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.718798 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:58.718807 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:58.718861 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:58.756986 1039759 cri.go:89] found id: ""
	I0729 14:40:58.757015 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.757025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:58.757031 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:58.757092 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:58.797572 1039759 cri.go:89] found id: ""
	I0729 14:40:58.797600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.797611 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:58.797620 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:58.797689 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:58.839410 1039759 cri.go:89] found id: ""
	I0729 14:40:58.839442 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.839453 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:58.839461 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:58.839523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:57.293935 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:59.294805 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:56.673078 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.674177 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:01.173709 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:57.708051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:00.207822 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:02.208128 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.874477 1039759 cri.go:89] found id: ""
	I0729 14:40:58.874508 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.874519 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:58.874528 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:58.874602 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:58.910248 1039759 cri.go:89] found id: ""
	I0729 14:40:58.910281 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.910296 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:58.910307 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:58.910368 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:58.944845 1039759 cri.go:89] found id: ""
	I0729 14:40:58.944879 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.944890 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:58.944896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:58.944955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:58.978818 1039759 cri.go:89] found id: ""
	I0729 14:40:58.978854 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.978867 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:58.978879 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:58.978898 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:59.018961 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:59.018993 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:59.069883 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:59.069920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:59.083277 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:59.083304 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:59.159470 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:59.159494 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:59.159511 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:01.746915 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:01.759883 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:01.759949 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:01.796563 1039759 cri.go:89] found id: ""
	I0729 14:41:01.796589 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.796602 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:01.796608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:01.796691 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:01.831464 1039759 cri.go:89] found id: ""
	I0729 14:41:01.831499 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.831511 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:01.831520 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:01.831586 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:01.868633 1039759 cri.go:89] found id: ""
	I0729 14:41:01.868660 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.868668 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:01.868674 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:01.868732 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:01.903154 1039759 cri.go:89] found id: ""
	I0729 14:41:01.903183 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.903194 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:01.903202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:01.903272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:01.938256 1039759 cri.go:89] found id: ""
	I0729 14:41:01.938292 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.938304 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:01.938312 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:01.938384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:01.978117 1039759 cri.go:89] found id: ""
	I0729 14:41:01.978147 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.978159 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:01.978168 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:01.978242 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:02.014061 1039759 cri.go:89] found id: ""
	I0729 14:41:02.014089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.014100 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:02.014108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:02.014176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:02.050133 1039759 cri.go:89] found id: ""
	I0729 14:41:02.050165 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.050177 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:02.050189 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:02.050206 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:02.101188 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:02.101253 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:02.114343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:02.114369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:02.190309 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:02.190338 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:02.190354 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:02.266895 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:02.266939 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:01.794976 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.295199 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:03.176713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:05.673543 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.708032 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:07.207702 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.809474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:04.824652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:04.824725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:04.858442 1039759 cri.go:89] found id: ""
	I0729 14:41:04.858474 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.858483 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:04.858490 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:04.858542 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:04.893199 1039759 cri.go:89] found id: ""
	I0729 14:41:04.893229 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.893237 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:04.893243 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:04.893297 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:04.929480 1039759 cri.go:89] found id: ""
	I0729 14:41:04.929512 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.929524 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:04.929532 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:04.929601 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:04.965097 1039759 cri.go:89] found id: ""
	I0729 14:41:04.965127 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.965139 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:04.965147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:04.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:05.003419 1039759 cri.go:89] found id: ""
	I0729 14:41:05.003449 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.003460 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:05.003467 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:05.003557 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:05.037408 1039759 cri.go:89] found id: ""
	I0729 14:41:05.037439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.037451 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:05.037458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:05.037527 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:05.072909 1039759 cri.go:89] found id: ""
	I0729 14:41:05.072942 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.072953 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:05.072961 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:05.073034 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:05.123731 1039759 cri.go:89] found id: ""
	I0729 14:41:05.123764 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.123776 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:05.123787 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:05.123802 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:05.188687 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:05.188732 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:05.204119 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:05.204160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:05.294702 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:05.294732 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:05.294750 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:05.377412 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:05.377456 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:07.923437 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:07.937633 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:07.937711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:07.976813 1039759 cri.go:89] found id: ""
	I0729 14:41:07.976850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:07.976861 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:07.976872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:07.976946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:08.013051 1039759 cri.go:89] found id: ""
	I0729 14:41:08.013089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.013100 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:08.013109 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:08.013177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:08.047372 1039759 cri.go:89] found id: ""
	I0729 14:41:08.047404 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.047413 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:08.047420 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:08.047477 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:08.080555 1039759 cri.go:89] found id: ""
	I0729 14:41:08.080594 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.080607 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:08.080615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:08.080684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:08.117054 1039759 cri.go:89] found id: ""
	I0729 14:41:08.117087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.117098 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:08.117106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:08.117175 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:08.152270 1039759 cri.go:89] found id: ""
	I0729 14:41:08.152295 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.152303 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:08.152309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:08.152373 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:08.188804 1039759 cri.go:89] found id: ""
	I0729 14:41:08.188830 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.188842 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:08.188848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:08.188903 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:08.225101 1039759 cri.go:89] found id: ""
	I0729 14:41:08.225139 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.225151 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:08.225164 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:08.225182 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:08.278721 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:08.278759 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:08.293417 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:08.293453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:08.371802 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:08.371825 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:08.371843 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:08.452233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:08.452274 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:06.795598 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.294006 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:08.175147 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.673937 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.707777 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:12.208180 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.993379 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:11.007599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:11.007668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:11.045603 1039759 cri.go:89] found id: ""
	I0729 14:41:11.045652 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.045675 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:11.045683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:11.045746 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:11.079682 1039759 cri.go:89] found id: ""
	I0729 14:41:11.079711 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.079722 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:11.079730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:11.079797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:11.122138 1039759 cri.go:89] found id: ""
	I0729 14:41:11.122167 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.122180 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:11.122185 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:11.122249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:11.157416 1039759 cri.go:89] found id: ""
	I0729 14:41:11.157444 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.157452 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:11.157458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:11.157514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:11.198589 1039759 cri.go:89] found id: ""
	I0729 14:41:11.198631 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.198643 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:11.198652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:11.198725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:11.238329 1039759 cri.go:89] found id: ""
	I0729 14:41:11.238360 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.238369 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:11.238376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:11.238442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:11.273283 1039759 cri.go:89] found id: ""
	I0729 14:41:11.273313 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.273322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:11.273328 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:11.273382 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:11.313927 1039759 cri.go:89] found id: ""
	I0729 14:41:11.313972 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.313984 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:11.313997 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:11.314014 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:11.366507 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:11.366546 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:11.380529 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:11.380566 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:11.451839 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:11.451862 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:11.451882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:11.537109 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:11.537150 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:11.294967 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.793738 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.173482 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:15.673025 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.706708 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:16.707135 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.104794 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:14.117474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:14.117541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:14.154117 1039759 cri.go:89] found id: ""
	I0729 14:41:14.154151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.154163 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:14.154171 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:14.154236 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:14.195762 1039759 cri.go:89] found id: ""
	I0729 14:41:14.195793 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.195804 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:14.195812 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:14.195875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:14.231434 1039759 cri.go:89] found id: ""
	I0729 14:41:14.231460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.231467 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:14.231474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:14.231523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:14.264802 1039759 cri.go:89] found id: ""
	I0729 14:41:14.264839 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.264851 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:14.264859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:14.264932 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:14.300162 1039759 cri.go:89] found id: ""
	I0729 14:41:14.300184 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.300194 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:14.300202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:14.300262 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:14.335351 1039759 cri.go:89] found id: ""
	I0729 14:41:14.335385 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.335396 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:14.335404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:14.335468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:14.370064 1039759 cri.go:89] found id: ""
	I0729 14:41:14.370096 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.370107 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:14.370115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:14.370184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:14.406506 1039759 cri.go:89] found id: ""
	I0729 14:41:14.406538 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.406549 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:14.406562 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:14.406579 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:14.445641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:14.445681 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:14.496132 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:14.496165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:14.509732 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:14.509767 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:14.581519 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:14.581541 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:14.581558 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.164487 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:17.178359 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:17.178447 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:17.213780 1039759 cri.go:89] found id: ""
	I0729 14:41:17.213869 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.213887 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:17.213896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:17.213966 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:17.251006 1039759 cri.go:89] found id: ""
	I0729 14:41:17.251045 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.251056 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:17.251063 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:17.251135 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:17.306624 1039759 cri.go:89] found id: ""
	I0729 14:41:17.306654 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.306683 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:17.306691 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:17.306775 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:17.358882 1039759 cri.go:89] found id: ""
	I0729 14:41:17.358915 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.358927 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:17.358935 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:17.359008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:17.408592 1039759 cri.go:89] found id: ""
	I0729 14:41:17.408620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.408636 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:17.408642 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:17.408705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:17.445201 1039759 cri.go:89] found id: ""
	I0729 14:41:17.445228 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.445236 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:17.445242 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:17.445305 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:17.477441 1039759 cri.go:89] found id: ""
	I0729 14:41:17.477483 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.477511 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:17.477518 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:17.477591 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:17.509148 1039759 cri.go:89] found id: ""
	I0729 14:41:17.509179 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.509190 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:17.509203 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:17.509220 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:17.559784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:17.559823 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:17.574163 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:17.574199 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:17.644249 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:17.644277 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:17.644294 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.720652 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:17.720688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:16.293977 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.793489 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.793760 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:17.674099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.173742 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.707238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:21.209948 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.261591 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:20.274649 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:20.274731 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:20.311561 1039759 cri.go:89] found id: ""
	I0729 14:41:20.311591 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.311600 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:20.311606 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:20.311668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:20.350267 1039759 cri.go:89] found id: ""
	I0729 14:41:20.350300 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.350313 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:20.350322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:20.350379 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:20.384183 1039759 cri.go:89] found id: ""
	I0729 14:41:20.384213 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.384220 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:20.384227 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:20.384288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:20.422330 1039759 cri.go:89] found id: ""
	I0729 14:41:20.422358 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.422367 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:20.422373 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:20.422442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:20.465537 1039759 cri.go:89] found id: ""
	I0729 14:41:20.465568 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.465577 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:20.465586 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:20.465663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:20.507661 1039759 cri.go:89] found id: ""
	I0729 14:41:20.507691 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.507701 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:20.507710 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:20.507774 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:20.545830 1039759 cri.go:89] found id: ""
	I0729 14:41:20.545857 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.545866 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:20.545872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:20.545936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:20.586311 1039759 cri.go:89] found id: ""
	I0729 14:41:20.586345 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.586354 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:20.586364 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:20.586379 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:20.635183 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:20.635224 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:20.649660 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:20.649701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:20.729588 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:20.729613 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:20.729632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:20.811565 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:20.811605 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:23.354318 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:23.367784 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:23.367862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:23.401929 1039759 cri.go:89] found id: ""
	I0729 14:41:23.401956 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.401965 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:23.401970 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:23.402033 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:23.437130 1039759 cri.go:89] found id: ""
	I0729 14:41:23.437161 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.437185 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:23.437205 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:23.437267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:23.474029 1039759 cri.go:89] found id: ""
	I0729 14:41:23.474066 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.474078 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:23.474087 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:23.474159 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:23.506678 1039759 cri.go:89] found id: ""
	I0729 14:41:23.506714 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.506725 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:23.506732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:23.506791 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:23.541578 1039759 cri.go:89] found id: ""
	I0729 14:41:23.541618 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.541628 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:23.541636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:23.541709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:23.575852 1039759 cri.go:89] found id: ""
	I0729 14:41:23.575883 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.575891 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:23.575898 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:23.575955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:23.610611 1039759 cri.go:89] found id: ""
	I0729 14:41:23.610638 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.610646 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:23.610653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:23.610717 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:23.650403 1039759 cri.go:89] found id: ""
	I0729 14:41:23.650429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.650438 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:23.650448 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:23.650460 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:23.701856 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:23.701899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:23.716925 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:23.716958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:23.790678 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:23.790699 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:23.790717 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:23.873204 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:23.873242 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:22.794021 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:25.294289 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:22.173787 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:24.673139 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:23.708892 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.207121 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.414319 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:26.428069 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:26.428152 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:26.462538 1039759 cri.go:89] found id: ""
	I0729 14:41:26.462578 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.462590 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:26.462599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:26.462687 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:26.496461 1039759 cri.go:89] found id: ""
	I0729 14:41:26.496501 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.496513 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:26.496521 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:26.496593 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:26.534152 1039759 cri.go:89] found id: ""
	I0729 14:41:26.534190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.534203 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:26.534210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:26.534273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:26.572986 1039759 cri.go:89] found id: ""
	I0729 14:41:26.573016 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.573024 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:26.573030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:26.573097 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:26.607330 1039759 cri.go:89] found id: ""
	I0729 14:41:26.607359 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.607370 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:26.607378 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:26.607445 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:26.643023 1039759 cri.go:89] found id: ""
	I0729 14:41:26.643056 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.643067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:26.643078 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:26.643145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:26.679820 1039759 cri.go:89] found id: ""
	I0729 14:41:26.679846 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.679856 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:26.679865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:26.679930 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:26.716433 1039759 cri.go:89] found id: ""
	I0729 14:41:26.716462 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.716470 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:26.716480 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:26.716494 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:26.794508 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:26.794529 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:26.794542 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:26.876663 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:26.876701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:26.917309 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:26.917343 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:26.969397 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:26.969436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:27.294711 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.793946 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.679220 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.173259 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:31.175213 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:28.207613 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:30.707297 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.483935 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:29.497502 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:29.497585 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:29.532671 1039759 cri.go:89] found id: ""
	I0729 14:41:29.532698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.532712 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:29.532719 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:29.532784 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:29.568058 1039759 cri.go:89] found id: ""
	I0729 14:41:29.568085 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.568096 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:29.568103 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:29.568176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:29.601173 1039759 cri.go:89] found id: ""
	I0729 14:41:29.601206 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.601216 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:29.601225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:29.601284 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:29.634333 1039759 cri.go:89] found id: ""
	I0729 14:41:29.634372 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.634384 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:29.634393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:29.634460 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:29.669669 1039759 cri.go:89] found id: ""
	I0729 14:41:29.669698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.669706 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:29.669712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:29.669777 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:29.702847 1039759 cri.go:89] found id: ""
	I0729 14:41:29.702876 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.702886 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:29.702894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:29.702960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:29.740713 1039759 cri.go:89] found id: ""
	I0729 14:41:29.740743 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.740754 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:29.740762 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:29.740846 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:29.777795 1039759 cri.go:89] found id: ""
	I0729 14:41:29.777829 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.777841 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:29.777853 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:29.777869 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:29.858713 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:29.858758 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:29.896873 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:29.896914 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:29.946905 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:29.946945 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:29.960136 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:29.960170 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:30.035951 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.536130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:32.549431 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:32.549501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:32.586069 1039759 cri.go:89] found id: ""
	I0729 14:41:32.586098 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.586117 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:32.586125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:32.586183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:32.623094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.623123 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.623132 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:32.623138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:32.623205 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:32.658370 1039759 cri.go:89] found id: ""
	I0729 14:41:32.658406 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.658418 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:32.658426 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:32.658492 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:32.696436 1039759 cri.go:89] found id: ""
	I0729 14:41:32.696469 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.696478 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:32.696484 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:32.696551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:32.731306 1039759 cri.go:89] found id: ""
	I0729 14:41:32.731340 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.731352 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:32.731361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:32.731431 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:32.767049 1039759 cri.go:89] found id: ""
	I0729 14:41:32.767087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.767098 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:32.767106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:32.767179 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:32.805094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.805126 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.805138 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:32.805147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:32.805223 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:32.840088 1039759 cri.go:89] found id: ""
	I0729 14:41:32.840116 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.840125 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:32.840137 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:32.840155 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:32.854065 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:32.854095 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:32.921447 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.921477 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:32.921493 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:33.005086 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:33.005129 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:33.042555 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:33.042617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:31.795000 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:34.293349 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:33.673734 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.674275 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:32.707849 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.210238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.593173 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:35.605965 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:35.606031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:35.639315 1039759 cri.go:89] found id: ""
	I0729 14:41:35.639355 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.639367 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:35.639374 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:35.639466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:35.678657 1039759 cri.go:89] found id: ""
	I0729 14:41:35.678686 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.678695 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:35.678700 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:35.678764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:35.714108 1039759 cri.go:89] found id: ""
	I0729 14:41:35.714136 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.714147 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:35.714155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:35.714220 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:35.748793 1039759 cri.go:89] found id: ""
	I0729 14:41:35.748820 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.748831 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:35.748837 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:35.748891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:35.788853 1039759 cri.go:89] found id: ""
	I0729 14:41:35.788884 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.788895 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:35.788903 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:35.788971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:35.825032 1039759 cri.go:89] found id: ""
	I0729 14:41:35.825059 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.825067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:35.825074 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:35.825126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:35.859990 1039759 cri.go:89] found id: ""
	I0729 14:41:35.860022 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.860033 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:35.860041 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:35.860131 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:35.894318 1039759 cri.go:89] found id: ""
	I0729 14:41:35.894352 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.894364 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:35.894377 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:35.894393 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:35.907591 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:35.907617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:35.975000 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:35.975023 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:35.975040 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:36.056188 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:36.056226 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:36.094569 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:36.094606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.648685 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:38.661546 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:38.661612 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:38.698658 1039759 cri.go:89] found id: ""
	I0729 14:41:38.698692 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.698704 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:38.698711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:38.698797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:38.731239 1039759 cri.go:89] found id: ""
	I0729 14:41:38.731274 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.731282 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:38.731288 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:38.731341 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:38.766549 1039759 cri.go:89] found id: ""
	I0729 14:41:38.766583 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.766594 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:38.766602 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:38.766663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:38.803347 1039759 cri.go:89] found id: ""
	I0729 14:41:38.803374 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.803385 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:38.803393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:38.803467 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:38.840327 1039759 cri.go:89] found id: ""
	I0729 14:41:38.840363 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.840374 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:38.840384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:38.840480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:38.874181 1039759 cri.go:89] found id: ""
	I0729 14:41:38.874211 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.874219 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:38.874225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:38.874293 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:36.297301 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.794975 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.173718 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:40.675880 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:37.707171 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:39.709125 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:42.206569 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.908642 1039759 cri.go:89] found id: ""
	I0729 14:41:38.908674 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.908686 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:38.908694 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:38.908762 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:38.945081 1039759 cri.go:89] found id: ""
	I0729 14:41:38.945107 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.945116 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:38.945126 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:38.945140 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.999792 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:38.999826 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:39.013396 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:39.013421 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:39.077975 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:39.077998 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:39.078016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:39.169606 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:39.169654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.716258 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:41.730508 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:41.730579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:41.766457 1039759 cri.go:89] found id: ""
	I0729 14:41:41.766490 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.766498 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:41.766505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:41.766571 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:41.801073 1039759 cri.go:89] found id: ""
	I0729 14:41:41.801099 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.801109 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:41.801117 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:41.801178 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:41.836962 1039759 cri.go:89] found id: ""
	I0729 14:41:41.836986 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.836997 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:41.837005 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:41.837072 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:41.870169 1039759 cri.go:89] found id: ""
	I0729 14:41:41.870195 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.870205 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:41.870213 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:41.870274 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:41.902298 1039759 cri.go:89] found id: ""
	I0729 14:41:41.902323 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.902331 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:41.902337 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:41.902387 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:41.935394 1039759 cri.go:89] found id: ""
	I0729 14:41:41.935429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.935441 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:41.935449 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:41.935513 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:41.972397 1039759 cri.go:89] found id: ""
	I0729 14:41:41.972437 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.972448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:41.972456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:41.972525 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:42.006477 1039759 cri.go:89] found id: ""
	I0729 14:41:42.006503 1039759 logs.go:276] 0 containers: []
	W0729 14:41:42.006513 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:42.006526 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:42.006540 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:42.053853 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:42.053886 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:42.067143 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:42.067172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:42.135406 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:42.135432 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:42.135449 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:42.212571 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:42.212603 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.293241 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.294160 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.793697 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.173087 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.174327 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.206854 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:46.707167 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.751283 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:44.764600 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:44.764688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:44.800821 1039759 cri.go:89] found id: ""
	I0729 14:41:44.800850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.800857 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:44.800863 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:44.800924 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:44.834638 1039759 cri.go:89] found id: ""
	I0729 14:41:44.834670 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.834680 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:44.834686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:44.834744 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:44.870198 1039759 cri.go:89] found id: ""
	I0729 14:41:44.870225 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.870237 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:44.870245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:44.870312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:44.904588 1039759 cri.go:89] found id: ""
	I0729 14:41:44.904620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.904631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:44.904639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:44.904713 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:44.939442 1039759 cri.go:89] found id: ""
	I0729 14:41:44.939467 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.939474 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:44.939480 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:44.939541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:44.972771 1039759 cri.go:89] found id: ""
	I0729 14:41:44.972799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.972808 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:44.972815 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:44.972888 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:45.007513 1039759 cri.go:89] found id: ""
	I0729 14:41:45.007540 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.007549 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:45.007557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:45.007626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:45.038752 1039759 cri.go:89] found id: ""
	I0729 14:41:45.038778 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.038787 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:45.038797 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:45.038821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:45.089807 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:45.089838 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:45.103188 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:45.103221 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:45.174509 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:45.174532 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:45.174554 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:45.255288 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:45.255327 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:47.799207 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:47.814781 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:47.814866 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:47.855111 1039759 cri.go:89] found id: ""
	I0729 14:41:47.855143 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.855156 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:47.855164 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:47.855230 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:47.892542 1039759 cri.go:89] found id: ""
	I0729 14:41:47.892577 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.892589 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:47.892603 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:47.892674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:47.933408 1039759 cri.go:89] found id: ""
	I0729 14:41:47.933439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.933451 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:47.933458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:47.933531 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:47.970397 1039759 cri.go:89] found id: ""
	I0729 14:41:47.970427 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.970439 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:47.970447 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:47.970514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:48.006852 1039759 cri.go:89] found id: ""
	I0729 14:41:48.006880 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.006891 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:48.006899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:48.006967 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:48.046766 1039759 cri.go:89] found id: ""
	I0729 14:41:48.046799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.046811 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:48.046820 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:48.046893 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:48.084354 1039759 cri.go:89] found id: ""
	I0729 14:41:48.084380 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.084387 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:48.084393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:48.084468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:48.121526 1039759 cri.go:89] found id: ""
	I0729 14:41:48.121559 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.121571 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:48.121582 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:48.121606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:48.136753 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:48.136784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:48.206914 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:48.206942 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:48.206958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:48.283843 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:48.283882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:48.325845 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:48.325878 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:47.794096 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.295275 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:47.182903 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.672827 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.206572 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.206900 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.881346 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:50.894098 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:50.894177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:50.927345 1039759 cri.go:89] found id: ""
	I0729 14:41:50.927375 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.927386 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:50.927399 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:50.927466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:50.962700 1039759 cri.go:89] found id: ""
	I0729 14:41:50.962726 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.962734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:50.962740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:50.962804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:50.997299 1039759 cri.go:89] found id: ""
	I0729 14:41:50.997334 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.997346 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:50.997354 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:50.997419 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:51.030157 1039759 cri.go:89] found id: ""
	I0729 14:41:51.030190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.030202 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:51.030211 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:51.030288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:51.063123 1039759 cri.go:89] found id: ""
	I0729 14:41:51.063151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.063162 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:51.063170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:51.063237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:51.096772 1039759 cri.go:89] found id: ""
	I0729 14:41:51.096819 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.096830 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:51.096838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:51.096912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:51.131976 1039759 cri.go:89] found id: ""
	I0729 14:41:51.132004 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.132014 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:51.132022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:51.132095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:51.167560 1039759 cri.go:89] found id: ""
	I0729 14:41:51.167599 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.167610 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:51.167622 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:51.167640 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:51.229416 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:51.229455 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:51.243576 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:51.243604 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:51.311103 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:51.311123 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:51.311139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:51.396369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:51.396432 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:52.793981 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.294172 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.673945 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:54.173681 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:56.174098 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.207656 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.709310 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.942329 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:53.955960 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:53.956027 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:53.988039 1039759 cri.go:89] found id: ""
	I0729 14:41:53.988074 1039759 logs.go:276] 0 containers: []
	W0729 14:41:53.988085 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:53.988094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:53.988162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:54.020948 1039759 cri.go:89] found id: ""
	I0729 14:41:54.020981 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.020992 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:54.020999 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:54.021067 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:54.053716 1039759 cri.go:89] found id: ""
	I0729 14:41:54.053744 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.053752 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:54.053759 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:54.053811 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:54.092348 1039759 cri.go:89] found id: ""
	I0729 14:41:54.092378 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.092390 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:54.092398 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:54.092471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:54.126114 1039759 cri.go:89] found id: ""
	I0729 14:41:54.126176 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.126189 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:54.126199 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:54.126316 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:54.162125 1039759 cri.go:89] found id: ""
	I0729 14:41:54.162157 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.162167 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:54.162174 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:54.162241 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:54.202407 1039759 cri.go:89] found id: ""
	I0729 14:41:54.202439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.202448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:54.202456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:54.202522 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:54.238650 1039759 cri.go:89] found id: ""
	I0729 14:41:54.238684 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.238695 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:54.238704 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:54.238718 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:54.291200 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:54.291243 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:54.306381 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:54.306415 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:54.371355 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:54.371384 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:54.371399 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:54.455200 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:54.455237 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:56.994689 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:57.007893 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:57.007958 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:57.041775 1039759 cri.go:89] found id: ""
	I0729 14:41:57.041808 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.041820 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:57.041828 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:57.041894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:57.075409 1039759 cri.go:89] found id: ""
	I0729 14:41:57.075442 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.075454 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:57.075462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:57.075524 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:57.120963 1039759 cri.go:89] found id: ""
	I0729 14:41:57.121000 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.121011 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:57.121019 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:57.121088 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:57.164882 1039759 cri.go:89] found id: ""
	I0729 14:41:57.164912 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.164923 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:57.164932 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:57.165001 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:57.198511 1039759 cri.go:89] found id: ""
	I0729 14:41:57.198537 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.198545 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:57.198550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:57.198604 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:57.238516 1039759 cri.go:89] found id: ""
	I0729 14:41:57.238544 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.238552 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:57.238559 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:57.238622 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:57.271823 1039759 cri.go:89] found id: ""
	I0729 14:41:57.271854 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.271865 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:57.271873 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:57.271937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:57.308435 1039759 cri.go:89] found id: ""
	I0729 14:41:57.308460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.308472 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:57.308483 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:57.308506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:57.359783 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:57.359818 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:57.372669 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:57.372698 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:57.440979 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:57.441004 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:57.441018 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:57.520105 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:57.520139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:57.295421 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:59.793704 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.673850 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:01.172547 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.207493 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.208108 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:02.208334 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.060542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:00.076125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:00.076192 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:00.113095 1039759 cri.go:89] found id: ""
	I0729 14:42:00.113129 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.113137 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:00.113150 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:00.113206 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:00.154104 1039759 cri.go:89] found id: ""
	I0729 14:42:00.154132 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.154139 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:00.154146 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:00.154202 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:00.190416 1039759 cri.go:89] found id: ""
	I0729 14:42:00.190443 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.190454 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:00.190462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:00.190532 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:00.228138 1039759 cri.go:89] found id: ""
	I0729 14:42:00.228173 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.228185 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:00.228192 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:00.228261 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:00.265679 1039759 cri.go:89] found id: ""
	I0729 14:42:00.265706 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.265715 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:00.265721 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:00.265787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:00.300283 1039759 cri.go:89] found id: ""
	I0729 14:42:00.300315 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.300333 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:00.300341 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:00.300433 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:00.339224 1039759 cri.go:89] found id: ""
	I0729 14:42:00.339255 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.339264 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:00.339270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:00.339333 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:00.375780 1039759 cri.go:89] found id: ""
	I0729 14:42:00.375815 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.375826 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:00.375836 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:00.375851 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:00.425145 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:00.425190 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:00.438860 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:00.438891 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:00.512668 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:00.512695 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:00.512714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:00.597083 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:00.597139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.141962 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:03.156295 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:03.156372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:03.192860 1039759 cri.go:89] found id: ""
	I0729 14:42:03.192891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.192902 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:03.192911 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:03.192982 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:03.234078 1039759 cri.go:89] found id: ""
	I0729 14:42:03.234104 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.234113 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:03.234119 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:03.234171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:03.268099 1039759 cri.go:89] found id: ""
	I0729 14:42:03.268124 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.268131 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:03.268138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:03.268197 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:03.306470 1039759 cri.go:89] found id: ""
	I0729 14:42:03.306498 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.306507 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:03.306513 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:03.306596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:03.341902 1039759 cri.go:89] found id: ""
	I0729 14:42:03.341933 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.341944 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:03.341952 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:03.342019 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:03.377235 1039759 cri.go:89] found id: ""
	I0729 14:42:03.377271 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.377282 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:03.377291 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:03.377355 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:03.411273 1039759 cri.go:89] found id: ""
	I0729 14:42:03.411308 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.411316 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:03.411322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:03.411397 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:03.446482 1039759 cri.go:89] found id: ""
	I0729 14:42:03.446511 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.446519 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:03.446530 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:03.446545 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:03.460222 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:03.460262 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:03.548149 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:03.548175 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:03.548191 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:03.640563 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:03.640608 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.681685 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:03.681713 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:02.293412 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.793239 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:03.174082 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:05.674438 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.706798 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.707818 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.234967 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:06.249656 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:06.249726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:06.284768 1039759 cri.go:89] found id: ""
	I0729 14:42:06.284798 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.284810 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:06.284822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:06.284880 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:06.321109 1039759 cri.go:89] found id: ""
	I0729 14:42:06.321140 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.321150 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:06.321158 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:06.321229 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:06.357238 1039759 cri.go:89] found id: ""
	I0729 14:42:06.357269 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.357278 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:06.357284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:06.357342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:06.391613 1039759 cri.go:89] found id: ""
	I0729 14:42:06.391643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.391653 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:06.391661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:06.391726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:06.428782 1039759 cri.go:89] found id: ""
	I0729 14:42:06.428813 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.428823 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:06.428831 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:06.428890 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:06.463558 1039759 cri.go:89] found id: ""
	I0729 14:42:06.463596 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.463607 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:06.463615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:06.463683 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:06.500442 1039759 cri.go:89] found id: ""
	I0729 14:42:06.500474 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.500484 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:06.500501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:06.500579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:06.535589 1039759 cri.go:89] found id: ""
	I0729 14:42:06.535627 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.535638 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:06.535650 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:06.535668 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:06.584641 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:06.584676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:06.597702 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:06.597737 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:06.664499 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:06.664537 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:06.664555 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:06.744808 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:06.744845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:06.793853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.294853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.172993 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:10.174863 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.707874 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:11.209387 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.286151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:09.307822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:09.307892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:09.369334 1039759 cri.go:89] found id: ""
	I0729 14:42:09.369363 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.369373 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:09.369381 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:09.369458 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:09.402302 1039759 cri.go:89] found id: ""
	I0729 14:42:09.402334 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.402345 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:09.402353 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:09.402423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:09.436351 1039759 cri.go:89] found id: ""
	I0729 14:42:09.436380 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.436402 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:09.436429 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:09.436501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:09.467735 1039759 cri.go:89] found id: ""
	I0729 14:42:09.467768 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.467780 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:09.467788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:09.467849 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:09.503328 1039759 cri.go:89] found id: ""
	I0729 14:42:09.503355 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.503367 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:09.503376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:09.503438 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:09.540012 1039759 cri.go:89] found id: ""
	I0729 14:42:09.540039 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.540047 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:09.540053 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:09.540106 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:09.576737 1039759 cri.go:89] found id: ""
	I0729 14:42:09.576801 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.576814 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:09.576822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:09.576920 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:09.614624 1039759 cri.go:89] found id: ""
	I0729 14:42:09.614651 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.614659 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:09.614669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:09.614684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:09.650533 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:09.650580 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:09.709144 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:09.709175 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:09.724147 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:09.724173 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:09.790737 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:09.790760 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:09.790775 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.376968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:12.390344 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:12.390409 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:12.424820 1039759 cri.go:89] found id: ""
	I0729 14:42:12.424849 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.424860 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:12.424876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:12.424943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:12.457444 1039759 cri.go:89] found id: ""
	I0729 14:42:12.457480 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.457492 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:12.457500 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:12.457561 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:12.490027 1039759 cri.go:89] found id: ""
	I0729 14:42:12.490058 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.490069 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:12.490077 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:12.490145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:12.523229 1039759 cri.go:89] found id: ""
	I0729 14:42:12.523256 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.523265 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:12.523270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:12.523321 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:12.557849 1039759 cri.go:89] found id: ""
	I0729 14:42:12.557875 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.557885 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:12.557891 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:12.557951 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:12.592943 1039759 cri.go:89] found id: ""
	I0729 14:42:12.592973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.592982 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:12.592989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:12.593059 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:12.626495 1039759 cri.go:89] found id: ""
	I0729 14:42:12.626531 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.626539 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:12.626557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:12.626641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:12.663764 1039759 cri.go:89] found id: ""
	I0729 14:42:12.663793 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.663805 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:12.663818 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:12.663835 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:12.722521 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:12.722556 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:12.736476 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:12.736505 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:12.809582 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:12.809617 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:12.809637 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.890665 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:12.890712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:11.793144 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.793447 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.794630 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:12.673257 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.173702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.707929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.707964 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.429702 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:15.443258 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:15.443340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:15.477170 1039759 cri.go:89] found id: ""
	I0729 14:42:15.477198 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.477207 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:15.477212 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:15.477266 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:15.511614 1039759 cri.go:89] found id: ""
	I0729 14:42:15.511652 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.511665 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:15.511671 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:15.511739 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:15.548472 1039759 cri.go:89] found id: ""
	I0729 14:42:15.548501 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.548511 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:15.548519 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:15.548590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:15.589060 1039759 cri.go:89] found id: ""
	I0729 14:42:15.589090 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.589102 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:15.589110 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:15.589185 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:15.622846 1039759 cri.go:89] found id: ""
	I0729 14:42:15.622873 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.622882 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:15.622887 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:15.622943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:15.656193 1039759 cri.go:89] found id: ""
	I0729 14:42:15.656220 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.656229 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:15.656237 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:15.656307 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:15.691301 1039759 cri.go:89] found id: ""
	I0729 14:42:15.691336 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.691348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:15.691357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:15.691420 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:15.729923 1039759 cri.go:89] found id: ""
	I0729 14:42:15.729963 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.729974 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:15.729988 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:15.730004 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:15.783531 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:15.783569 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:15.799590 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:15.799619 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:15.874849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:15.874886 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:15.874901 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:15.957384 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:15.957424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.497035 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:18.511538 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:18.511616 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:18.550512 1039759 cri.go:89] found id: ""
	I0729 14:42:18.550552 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.550573 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:18.550582 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:18.550642 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:18.585910 1039759 cri.go:89] found id: ""
	I0729 14:42:18.585942 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.585954 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:18.585962 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:18.586031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:18.619680 1039759 cri.go:89] found id: ""
	I0729 14:42:18.619712 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.619722 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:18.619730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:18.619799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:18.651559 1039759 cri.go:89] found id: ""
	I0729 14:42:18.651592 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.651604 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:18.651613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:18.651688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:18.686668 1039759 cri.go:89] found id: ""
	I0729 14:42:18.686693 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.686701 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:18.686711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:18.686764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:18.722832 1039759 cri.go:89] found id: ""
	I0729 14:42:18.722859 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.722869 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:18.722876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:18.722927 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:18.758261 1039759 cri.go:89] found id: ""
	I0729 14:42:18.758289 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.758302 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:18.758310 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:18.758378 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:18.795190 1039759 cri.go:89] found id: ""
	I0729 14:42:18.795216 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.795227 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:18.795237 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:18.795251 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.835331 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:18.835366 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:17.796916 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.294082 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:17.673000 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:19.674010 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.209178 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.707421 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.889707 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:18.889745 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:18.902477 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:18.902503 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:18.970712 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:18.970735 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:18.970748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:21.552092 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:21.566581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.566669 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.600230 1039759 cri.go:89] found id: ""
	I0729 14:42:21.600261 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.600275 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:21.600283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.600346 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.636576 1039759 cri.go:89] found id: ""
	I0729 14:42:21.636616 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.636627 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:21.636635 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.636705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.672944 1039759 cri.go:89] found id: ""
	I0729 14:42:21.672973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.672984 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:21.672997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.673063 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.708555 1039759 cri.go:89] found id: ""
	I0729 14:42:21.708582 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.708601 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:21.708613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:21.708673 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:21.744862 1039759 cri.go:89] found id: ""
	I0729 14:42:21.744891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.744902 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:21.744908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:21.744973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:21.779084 1039759 cri.go:89] found id: ""
	I0729 14:42:21.779111 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.779119 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:21.779126 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:21.779183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:21.819931 1039759 cri.go:89] found id: ""
	I0729 14:42:21.819972 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.819981 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:21.819989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:21.820047 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:21.855472 1039759 cri.go:89] found id: ""
	I0729 14:42:21.855500 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.855509 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:21.855522 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:21.855539 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:21.925561 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:21.925579 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:21.925596 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.015986 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:22.016032 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:22.059898 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:22.059935 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:22.129018 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.129055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:21.787886 1039263 pod_ready.go:81] duration metric: took 4m0.000465481s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:21.787929 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 14:42:21.787945 1039263 pod_ready.go:38] duration metric: took 4m5.237036546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:21.787973 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:42:21.788025 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.788089 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.857594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:21.857613 1039263 cri.go:89] found id: ""
	I0729 14:42:21.857620 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:21.857674 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.862462 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.862523 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.903562 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:21.903594 1039263 cri.go:89] found id: ""
	I0729 14:42:21.903604 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:21.903660 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.908232 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.908327 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.947632 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:21.947663 1039263 cri.go:89] found id: ""
	I0729 14:42:21.947674 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:21.947737 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.952576 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.952649 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.995318 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:21.995343 1039263 cri.go:89] found id: ""
	I0729 14:42:21.995351 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:21.995418 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.000352 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:22.000440 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:22.040544 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.040572 1039263 cri.go:89] found id: ""
	I0729 14:42:22.040582 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:22.040648 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.044840 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:22.044910 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:22.090787 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:22.090816 1039263 cri.go:89] found id: ""
	I0729 14:42:22.090827 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:22.090897 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.096748 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:22.096826 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:22.143491 1039263 cri.go:89] found id: ""
	I0729 14:42:22.143522 1039263 logs.go:276] 0 containers: []
	W0729 14:42:22.143534 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:22.143541 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:22.143609 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:22.179378 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:22.179404 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:22.179409 1039263 cri.go:89] found id: ""
	I0729 14:42:22.179419 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:22.179482 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.184686 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.189009 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:22.189029 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:22.250475 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:22.250510 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.286581 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:22.286622 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.325541 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:22.325570 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.831822 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.831875 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:22.846540 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:22.846588 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:22.970758 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:22.970796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:23.013428 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:23.013467 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:23.064784 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:23.064820 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:23.111615 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:23.111653 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:23.151296 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:23.151328 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:23.198650 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:23.198692 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:23.259196 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:23.259247 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.808980 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:25.829180 1039263 api_server.go:72] duration metric: took 4m16.997740137s to wait for apiserver process to appear ...
	I0729 14:42:25.829211 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:42:25.829260 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:25.829335 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:25.875138 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.875167 1039263 cri.go:89] found id: ""
	I0729 14:42:25.875175 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:25.875230 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.879855 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:25.879937 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:25.916938 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:25.916964 1039263 cri.go:89] found id: ""
	I0729 14:42:25.916974 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:25.917036 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.921166 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:25.921224 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:25.958196 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:25.958224 1039263 cri.go:89] found id: ""
	I0729 14:42:25.958234 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:25.958300 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.962697 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:25.962760 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:26.000162 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:26.000195 1039263 cri.go:89] found id: ""
	I0729 14:42:26.000206 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:26.000277 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.004518 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:26.004594 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:26.041099 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:26.041133 1039263 cri.go:89] found id: ""
	I0729 14:42:26.041144 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:26.041208 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.045334 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:26.045412 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:26.082783 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:26.082815 1039263 cri.go:89] found id: ""
	I0729 14:42:26.082826 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:26.082901 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.086996 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:26.087063 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:26.123636 1039263 cri.go:89] found id: ""
	I0729 14:42:26.123677 1039263 logs.go:276] 0 containers: []
	W0729 14:42:26.123688 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:26.123694 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:26.123756 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:26.163819 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.163849 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.163855 1039263 cri.go:89] found id: ""
	I0729 14:42:26.163864 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:26.163929 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.168611 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.173125 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:26.173155 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.173593 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:22.708101 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:25.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:27.207926 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.645474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:24.658107 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:24.658171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:24.696604 1039759 cri.go:89] found id: ""
	I0729 14:42:24.696635 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.696645 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:24.696653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:24.696725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:24.733862 1039759 cri.go:89] found id: ""
	I0729 14:42:24.733887 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.733894 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:24.733901 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:24.733957 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:24.770614 1039759 cri.go:89] found id: ""
	I0729 14:42:24.770644 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.770656 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:24.770664 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:24.770734 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:24.806368 1039759 cri.go:89] found id: ""
	I0729 14:42:24.806394 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.806403 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:24.806408 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:24.806470 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:24.838490 1039759 cri.go:89] found id: ""
	I0729 14:42:24.838526 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.838534 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:24.838541 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:24.838596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:24.871017 1039759 cri.go:89] found id: ""
	I0729 14:42:24.871043 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.871051 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:24.871057 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:24.871128 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:24.903281 1039759 cri.go:89] found id: ""
	I0729 14:42:24.903311 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.903322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:24.903330 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:24.903403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:24.937245 1039759 cri.go:89] found id: ""
	I0729 14:42:24.937279 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.937291 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:24.937304 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:24.937319 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:24.989518 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:24.989551 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:25.005021 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:25.005055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:25.080849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:25.080877 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:25.080893 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:25.163742 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:25.163784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:27.706182 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:27.719350 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:27.719425 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:27.756955 1039759 cri.go:89] found id: ""
	I0729 14:42:27.756982 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.756990 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:27.756997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:27.757054 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:27.791975 1039759 cri.go:89] found id: ""
	I0729 14:42:27.792014 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.792025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:27.792033 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:27.792095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:27.834188 1039759 cri.go:89] found id: ""
	I0729 14:42:27.834215 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.834223 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:27.834230 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:27.834296 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:27.867798 1039759 cri.go:89] found id: ""
	I0729 14:42:27.867834 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.867843 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:27.867851 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:27.867918 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:27.900316 1039759 cri.go:89] found id: ""
	I0729 14:42:27.900343 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.900351 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:27.900357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:27.900422 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:27.932361 1039759 cri.go:89] found id: ""
	I0729 14:42:27.932391 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.932402 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:27.932425 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:27.932493 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:27.965530 1039759 cri.go:89] found id: ""
	I0729 14:42:27.965562 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.965573 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:27.965581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:27.965651 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:27.999582 1039759 cri.go:89] found id: ""
	I0729 14:42:27.999608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.999617 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:27.999626 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:27.999654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:28.069415 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:28.069438 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:28.069454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:28.149781 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:28.149821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:28.190045 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:28.190072 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:28.244147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:28.244188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.217755 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:26.217796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.257363 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:26.257399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.297502 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:26.297534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:26.729336 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:26.729370 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:26.779172 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:26.779213 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.794369 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:26.794399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:26.857964 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:26.858000 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.895052 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:26.895083 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:26.936360 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:26.936395 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:27.037118 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:27.037160 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:27.089764 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:27.089798 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:27.134009 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:27.134042 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.690960 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:42:29.696457 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:42:29.697313 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:42:29.697335 1039263 api_server.go:131] duration metric: took 3.868117139s to wait for apiserver health ...
	I0729 14:42:29.697343 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:42:29.697370 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:29.697430 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:29.740594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:29.740623 1039263 cri.go:89] found id: ""
	I0729 14:42:29.740633 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:29.740696 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.745183 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:29.745257 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:29.780091 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:29.780112 1039263 cri.go:89] found id: ""
	I0729 14:42:29.780119 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:29.780178 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.784241 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:29.784305 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:29.825618 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:29.825641 1039263 cri.go:89] found id: ""
	I0729 14:42:29.825649 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:29.825715 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.830291 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:29.830351 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:29.866651 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:29.866685 1039263 cri.go:89] found id: ""
	I0729 14:42:29.866695 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:29.866758 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.871440 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:29.871494 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:29.911944 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:29.911968 1039263 cri.go:89] found id: ""
	I0729 14:42:29.911976 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:29.912037 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.916604 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:29.916680 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:29.954334 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.954361 1039263 cri.go:89] found id: ""
	I0729 14:42:29.954371 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:29.954446 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.959051 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:29.959130 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:29.996760 1039263 cri.go:89] found id: ""
	I0729 14:42:29.996795 1039263 logs.go:276] 0 containers: []
	W0729 14:42:29.996804 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:29.996812 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:29.996883 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:30.034562 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.034598 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.034604 1039263 cri.go:89] found id: ""
	I0729 14:42:30.034614 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:30.034682 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.039588 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.043866 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:30.043889 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:30.091309 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:30.091349 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:30.149888 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:30.149926 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:30.189441 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:30.189479 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:30.250850 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:30.250890 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.290077 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:30.290111 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.329035 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:30.329068 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:30.383068 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:30.383113 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:30.497009 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:30.497045 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:30.914489 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:30.914534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:30.972901 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:30.972951 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:31.021798 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.021838 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:31.040147 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:31.040182 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.674294 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.173375 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:31.173588 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.710051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:32.209382 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.593681 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:42:33.593711 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.593716 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.593719 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.593723 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.593725 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.593728 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.593733 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.593736 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.593744 1039263 system_pods.go:74] duration metric: took 3.896394577s to wait for pod list to return data ...
	I0729 14:42:33.593751 1039263 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:42:33.596176 1039263 default_sa.go:45] found service account: "default"
	I0729 14:42:33.596197 1039263 default_sa.go:55] duration metric: took 2.440561ms for default service account to be created ...
	I0729 14:42:33.596205 1039263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:42:33.601830 1039263 system_pods.go:86] 8 kube-system pods found
	I0729 14:42:33.601855 1039263 system_pods.go:89] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.601861 1039263 system_pods.go:89] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.601866 1039263 system_pods.go:89] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.601871 1039263 system_pods.go:89] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.601878 1039263 system_pods.go:89] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.601887 1039263 system_pods.go:89] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.601897 1039263 system_pods.go:89] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.601908 1039263 system_pods.go:89] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.601921 1039263 system_pods.go:126] duration metric: took 5.70985ms to wait for k8s-apps to be running ...
	I0729 14:42:33.601934 1039263 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:42:33.601994 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:33.620869 1039263 system_svc.go:56] duration metric: took 18.921974ms WaitForService to wait for kubelet
	I0729 14:42:33.620907 1039263 kubeadm.go:582] duration metric: took 4m24.7894747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:42:33.620939 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:42:33.623517 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:42:33.623538 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:42:33.623562 1039263 node_conditions.go:105] duration metric: took 2.617272ms to run NodePressure ...
	I0729 14:42:33.623582 1039263 start.go:241] waiting for startup goroutines ...
	I0729 14:42:33.623591 1039263 start.go:246] waiting for cluster config update ...
	I0729 14:42:33.623601 1039263 start.go:255] writing updated cluster config ...
	I0729 14:42:33.623897 1039263 ssh_runner.go:195] Run: rm -f paused
	I0729 14:42:33.677961 1039263 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:42:33.679952 1039263 out.go:177] * Done! kubectl is now configured to use "embed-certs-668123" cluster and "default" namespace by default
	I0729 14:42:30.758335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:30.771788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:30.771860 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:30.807608 1039759 cri.go:89] found id: ""
	I0729 14:42:30.807633 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.807641 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:30.807647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:30.807709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:30.842361 1039759 cri.go:89] found id: ""
	I0729 14:42:30.842389 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.842397 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:30.842404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:30.842474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:30.879123 1039759 cri.go:89] found id: ""
	I0729 14:42:30.879149 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.879157 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:30.879162 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:30.879228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:30.913042 1039759 cri.go:89] found id: ""
	I0729 14:42:30.913072 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.913084 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:30.913092 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:30.913162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:30.949867 1039759 cri.go:89] found id: ""
	I0729 14:42:30.949900 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.949910 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:30.949919 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:30.949988 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:30.997468 1039759 cri.go:89] found id: ""
	I0729 14:42:30.997497 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.997509 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:30.997516 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:30.997606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:31.039611 1039759 cri.go:89] found id: ""
	I0729 14:42:31.039643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.039654 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:31.039662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:31.039730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:31.085802 1039759 cri.go:89] found id: ""
	I0729 14:42:31.085839 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.085851 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:31.085862 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:31.085890 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:31.155919 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:31.155941 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:31.155954 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:31.232795 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:31.232833 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:31.270647 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:31.270682 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:31.324648 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.324685 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:33.839801 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:33.853358 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:33.853417 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:33.674345 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:36.174468 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:34.707752 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:37.209918 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.889294 1039759 cri.go:89] found id: ""
	I0729 14:42:33.889323 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.889334 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:33.889342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:33.889413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:33.930106 1039759 cri.go:89] found id: ""
	I0729 14:42:33.930130 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.930142 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:33.930149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:33.930211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:33.973607 1039759 cri.go:89] found id: ""
	I0729 14:42:33.973634 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.973646 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:33.973654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:33.973715 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:34.010103 1039759 cri.go:89] found id: ""
	I0729 14:42:34.010133 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.010142 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:34.010149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:34.010209 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:34.044050 1039759 cri.go:89] found id: ""
	I0729 14:42:34.044080 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.044092 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:34.044099 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:34.044174 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:34.081222 1039759 cri.go:89] found id: ""
	I0729 14:42:34.081250 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.081260 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:34.081268 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:34.081360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:34.115837 1039759 cri.go:89] found id: ""
	I0729 14:42:34.115878 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.115891 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:34.115899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:34.115973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:34.151086 1039759 cri.go:89] found id: ""
	I0729 14:42:34.151116 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.151126 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:34.151139 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:34.151156 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:34.164058 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:34.164087 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:34.238481 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:34.238503 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:34.238518 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:34.316236 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:34.316279 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:34.356281 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:34.356316 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:36.910374 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:36.924907 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:36.925008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:36.960508 1039759 cri.go:89] found id: ""
	I0729 14:42:36.960535 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.960543 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:36.960550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:36.960631 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:36.999840 1039759 cri.go:89] found id: ""
	I0729 14:42:36.999869 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.999881 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:36.999889 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:36.999960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:37.032801 1039759 cri.go:89] found id: ""
	I0729 14:42:37.032832 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.032840 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:37.032847 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:37.032907 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:37.066359 1039759 cri.go:89] found id: ""
	I0729 14:42:37.066386 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.066394 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:37.066401 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:37.066454 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:37.103816 1039759 cri.go:89] found id: ""
	I0729 14:42:37.103844 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.103852 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:37.103859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:37.103922 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:37.137135 1039759 cri.go:89] found id: ""
	I0729 14:42:37.137175 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.137186 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:37.137194 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:37.137267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:37.170819 1039759 cri.go:89] found id: ""
	I0729 14:42:37.170851 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.170863 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:37.170871 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:37.170941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:37.206427 1039759 cri.go:89] found id: ""
	I0729 14:42:37.206456 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.206467 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:37.206478 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:37.206492 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:37.287119 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:37.287160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:37.331090 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:37.331119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:37.392147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:37.392189 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:37.406017 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:37.406047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:37.471644 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
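	(The repeated "listing CRI containers ... found id: \"\"" lines above show the log-gathering pass probing each control-plane component with crictl and finding nothing running, which is why only journalctl, dmesg and container-status output can be collected. A minimal standalone sketch of that probe, for illustration only — the crictl invocation is the one shown in the log, but this loop itself is not minikube code:

	#!/bin/bash
	# Probe for control-plane containers the same way the log above does:
	# one `crictl ps` per component, matching on the container name.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done
	)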
	I0729 14:42:38.673603 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:40.674214 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:39.706915 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:41.201453 1039440 pod_ready.go:81] duration metric: took 4m0.000454399s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:41.201488 1039440 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:42:41.201514 1039440 pod_ready.go:38] duration metric: took 4m13.052610312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:41.201553 1039440 kubeadm.go:597] duration metric: took 4m22.712976139s to restartPrimaryControlPlane
	W0729 14:42:41.201639 1039440 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:41.201696 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:39.972835 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:39.985878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:39.985945 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:40.020312 1039759 cri.go:89] found id: ""
	I0729 14:42:40.020349 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.020360 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:40.020368 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:40.020456 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:40.055688 1039759 cri.go:89] found id: ""
	I0729 14:42:40.055721 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.055732 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:40.055740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:40.055799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:40.090432 1039759 cri.go:89] found id: ""
	I0729 14:42:40.090463 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.090472 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:40.090478 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:40.090549 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:40.127794 1039759 cri.go:89] found id: ""
	I0729 14:42:40.127823 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.127832 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:40.127838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:40.127894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:40.162911 1039759 cri.go:89] found id: ""
	I0729 14:42:40.162944 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.162953 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:40.162959 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:40.163020 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:40.201578 1039759 cri.go:89] found id: ""
	I0729 14:42:40.201608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.201619 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:40.201625 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:40.201684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:40.247314 1039759 cri.go:89] found id: ""
	I0729 14:42:40.247340 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.247348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:40.247363 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:40.247436 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:40.285393 1039759 cri.go:89] found id: ""
	I0729 14:42:40.285422 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.285431 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:40.285440 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:40.285458 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:40.299901 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:40.299933 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:40.372774 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:40.372802 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:40.372821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:40.454392 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:40.454447 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:40.494641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:40.494671 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:43.046060 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:43.058790 1039759 kubeadm.go:597] duration metric: took 4m3.37086398s to restartPrimaryControlPlane
	W0729 14:42:43.058888 1039759 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:43.058920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:43.544647 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:43.560304 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:42:43.570229 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:42:43.579922 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:42:43.579946 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:42:43.580004 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:42:43.589520 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:42:43.589591 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:42:43.600286 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:42:43.611565 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:42:43.611629 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:42:43.623432 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.633289 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:42:43.633338 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.643410 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:42:43.653723 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:42:43.653816 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
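	(The grep/rm pairs above are the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the following `kubeadm init` can regenerate it. A rough shell equivalent of that check-and-remove step, for illustration only — the endpoint and file names are taken from the log, the loop is not minikube code:

	#!/bin/bash
	# Remove kubeconfig files that do not reference the expected endpoint,
	# mirroring the grep/rm sequence in the log above.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done
	)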
	I0729 14:42:43.663840 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:42:43.735243 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:42:43.735314 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:42:43.904148 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:42:43.904310 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:42:43.904480 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:42:44.101401 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:42:44.103392 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:42:44.103499 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:42:44.103580 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:42:44.103693 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:42:44.103829 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:42:44.103944 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:42:44.104054 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:42:44.104146 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:42:44.104360 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:42:44.104599 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:42:44.105264 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:42:44.105363 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:42:44.105461 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:42:44.426107 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:42:44.593004 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:42:44.845387 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:42:44.934634 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:42:44.959808 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:42:44.961918 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:42:44.961990 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:42:45.117986 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:42:42.678218 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.175453 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.119775 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:42:45.119913 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:42:45.121333 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:42:45.123001 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:42:45.123783 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:42:45.126031 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:42:47.673678 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:49.674212 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:52.173086 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:54.173797 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:56.178948 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:58.674432 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:00.675207 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:03.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:05.175460 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:07.674421 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:09.674478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:12.882329 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.680602745s)
	I0729 14:43:12.882426 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:12.900267 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:12.910750 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:12.921172 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:12.921194 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:12.921244 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:43:12.931186 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:12.931243 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:12.940800 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:43:12.949875 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:12.949929 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:12.959555 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.968817 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:12.968871 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.978560 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:43:12.987657 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:12.987700 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:12.997142 1039440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:13.057245 1039440 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 14:43:13.057405 1039440 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:13.205227 1039440 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:13.205381 1039440 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:13.205541 1039440 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:43:13.404885 1039440 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:13.407054 1039440 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:13.407148 1039440 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:13.407232 1039440 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:13.407329 1039440 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:13.407411 1039440 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:13.407509 1039440 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:13.407598 1039440 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:13.407688 1039440 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:13.407774 1039440 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:13.407889 1039440 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:13.408006 1039440 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:13.408071 1039440 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:13.408177 1039440 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:13.563569 1039440 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:14.001138 1039440 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:14.091368 1039440 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:14.238732 1039440 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:14.344460 1039440 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:14.346386 1039440 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:14.349309 1039440 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:12.174022 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.673166 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.351183 1039440 out.go:204]   - Booting up control plane ...
	I0729 14:43:14.351293 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:14.351374 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:14.351671 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:14.375878 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:14.377114 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:14.377198 1039440 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:14.528561 1039440 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:14.528665 1039440 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:15.030447 1039440 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044001ms
	I0729 14:43:15.030591 1039440 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:43:20.033357 1039440 kubeadm.go:310] [api-check] The API server is healthy after 5.002708747s
	I0729 14:43:20.055871 1039440 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:43:20.069020 1039440 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:43:20.108465 1039440 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:43:20.108664 1039440 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-751306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:43:20.124596 1039440 kubeadm.go:310] [bootstrap-token] Using token: vqqt7g.hayxn6bly3sjo08s
	I0729 14:43:20.125995 1039440 out.go:204]   - Configuring RBAC rules ...
	I0729 14:43:20.126124 1039440 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:43:20.138826 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:43:20.145976 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:43:20.149166 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:43:20.152875 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:43:20.156268 1039440 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:43:20.446117 1039440 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:43:20.900251 1039440 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:43:21.446105 1039440 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:43:21.446920 1039440 kubeadm.go:310] 
	I0729 14:43:21.446984 1039440 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:43:21.446992 1039440 kubeadm.go:310] 
	I0729 14:43:21.447057 1039440 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:43:21.447063 1039440 kubeadm.go:310] 
	I0729 14:43:21.447084 1039440 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:43:21.447133 1039440 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:43:21.447176 1039440 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:43:21.447182 1039440 kubeadm.go:310] 
	I0729 14:43:21.447233 1039440 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:43:21.447242 1039440 kubeadm.go:310] 
	I0729 14:43:21.447310 1039440 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:43:21.447334 1039440 kubeadm.go:310] 
	I0729 14:43:21.447408 1039440 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:43:21.447515 1039440 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:43:21.447574 1039440 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:43:21.447582 1039440 kubeadm.go:310] 
	I0729 14:43:21.447652 1039440 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:43:21.447722 1039440 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:43:21.447728 1039440 kubeadm.go:310] 
	I0729 14:43:21.447799 1039440 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.447903 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:43:21.447931 1039440 kubeadm.go:310] 	--control-plane 
	I0729 14:43:21.447935 1039440 kubeadm.go:310] 
	I0729 14:43:21.448017 1039440 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:43:21.448025 1039440 kubeadm.go:310] 
	I0729 14:43:21.448115 1039440 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.448239 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:43:21.449071 1039440 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:43:21.449117 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:43:21.449134 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:43:21.450744 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:43:16.674887 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:19.175478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:21.452012 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:43:21.464232 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
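	(The two lines above create /etc/cni/net.d and copy a 496-byte bridge CNI conflist into it. For orientation only, a generic bridge/host-local conflist of roughly the standard shape is sketched below; the exact file minikube writes is not reproduced here, and the subnet shown is an assumed placeholder:

	#!/bin/bash
	# Write a generic bridge CNI config. Illustrative of the standard
	# bridge + host-local + portmap schema only; not the exact 1-k8s.conflist
	# that minikube copies over ssh.
	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
	)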
	I0729 14:43:21.486786 1039440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:43:21.486890 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.486887 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-751306 minikube.k8s.io/updated_at=2024_07_29T14_43_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=default-k8s-diff-port-751306 minikube.k8s.io/primary=true
	I0729 14:43:21.689413 1039440 ops.go:34] apiserver oom_adj: -16
	I0729 14:43:21.697342 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:22.198351 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.673361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:23.674189 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:26.173782 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:22.698043 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.198259 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.697640 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.198325 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.697702 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.198216 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.697625 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.197978 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.698039 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:27.197794 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.126835 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:43:25.127033 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:25.127306 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:28.174036 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:29.667306 1038758 pod_ready.go:81] duration metric: took 4m0.000473541s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	E0729 14:43:29.667341 1038758 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:43:29.667369 1038758 pod_ready.go:38] duration metric: took 4m13.916299366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:29.667407 1038758 kubeadm.go:597] duration metric: took 4m21.57875039s to restartPrimaryControlPlane
	W0729 14:43:29.667481 1038758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:43:29.667513 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:43:27.698036 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.197941 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.697839 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.197525 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.698141 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.197670 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.697615 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.197999 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.697648 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:32.197647 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.127504 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:30.127777 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:32.697837 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.197692 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.697431 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.198048 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.698439 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.802320 1039440 kubeadm.go:1113] duration metric: took 13.31552277s to wait for elevateKubeSystemPrivileges
	I0729 14:43:34.802367 1039440 kubeadm.go:394] duration metric: took 5m16.369033556s to StartCluster
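	(The long run of identical `kubectl get sa default` commands above is a simple readiness poll, retried roughly every half second until the "default" service account exists; that wait is what the ~13.3s elevateKubeSystemPrivileges duration measures. A bash sketch of the same poll, for illustration only — the binary path and kubeconfig are the ones shown in the log:

	#!/bin/bash
	# Poll until the "default" service account exists, mirroring the
	# repeated `kubectl get sa default` runs in the log above.
	KUBECTL=/var/lib/minikube/binaries/v1.30.3/kubectl
	until sudo "$KUBECTL" get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	)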
	I0729 14:43:34.802391 1039440 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.802488 1039440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:43:34.804740 1039440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.805049 1039440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:43:34.805148 1039440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:43:34.805251 1039440 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805262 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:43:34.805269 1039440 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805313 1039440 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805294 1039440 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805341 1039440 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:43:34.805358 1039440 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805369 1039440 addons.go:243] addon metrics-server should already be in state true
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805325 1039440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751306"
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805838 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805869 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805904 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805928 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805968 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.806026 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.806625 1039440 out.go:177] * Verifying Kubernetes components...
	I0729 14:43:34.807999 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:43:34.823091 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0729 14:43:34.823103 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0729 14:43:34.823532 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.823556 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.824084 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824111 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824372 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824399 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824427 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.824891 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.825049 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0729 14:43:34.825140 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.825191 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.825210 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.825415 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.825927 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.825945 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.826314 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.826903 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.826939 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.829361 1039440 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.829386 1039440 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:43:34.829417 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.829785 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.829832 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.841752 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I0729 14:43:34.842232 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.842938 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.842965 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.843370 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0729 14:43:34.843397 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.843713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.843818 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.844223 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.844247 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.844615 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.844805 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.846424 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.846619 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.848531 1039440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:43:34.848918 1039440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:43:34.849006 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0729 14:43:34.849421 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.849852 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:43:34.849870 1039440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:43:34.849888 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850037 1039440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:34.850053 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:43:34.850069 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850233 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.850251 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.850659 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.851665 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.851781 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.853937 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854441 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854518 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.854540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854589 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.854779 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855035 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.855098 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.855114 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.855169 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.855465 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.855658 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855828 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.856191 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.869648 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0729 14:43:34.870131 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.870600 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.870618 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.871134 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.871334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.873088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.873340 1039440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:34.873353 1039440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:43:34.873369 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.876289 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876751 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.876765 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876952 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.877132 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.877267 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.877375 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:35.022897 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:43:35.044537 1039440 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057697 1039440 node_ready.go:49] node "default-k8s-diff-port-751306" has status "Ready":"True"
	I0729 14:43:35.057729 1039440 node_ready.go:38] duration metric: took 13.149458ms for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057744 1039440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:35.073050 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:35.150661 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:35.170721 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:35.228871 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:43:35.228903 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:43:35.276845 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:43:35.276880 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:43:35.335623 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.335656 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:43:35.407804 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.446540 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446567 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.446927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.446959 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.446972 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.446985 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.447286 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.447307 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.454199 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.454216 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.454476 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.454495 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.824592 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.824615 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.825058 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.825441 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.825525 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.825567 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.825576 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.827444 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.827454 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.827465 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331175 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331575 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331597 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331607 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331923 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331961 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331986 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.332003 1039440 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751306"
	I0729 14:43:36.333995 1039440 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 14:43:36.335441 1039440 addons.go:510] duration metric: took 1.53029708s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 14:43:37.081992 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.082019 1039440 pod_ready.go:81] duration metric: took 2.008931409s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.082031 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086173 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.086194 1039440 pod_ready.go:81] duration metric: took 4.154163ms for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086203 1039440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090617 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.090636 1039440 pod_ready.go:81] duration metric: took 4.42625ms for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090647 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094929 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.094950 1039440 pod_ready.go:81] duration metric: took 4.296245ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094962 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099462 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.099483 1039440 pod_ready.go:81] duration metric: took 4.513354ms for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099495 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478252 1039440 pod_ready.go:92] pod "kube-proxy-tqtjx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.478281 1039440 pod_ready.go:81] duration metric: took 378.778206ms for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478295 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878655 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.878678 1039440 pod_ready.go:81] duration metric: took 400.374407ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878686 1039440 pod_ready.go:38] duration metric: took 2.820929833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:37.878702 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:43:37.878752 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:43:37.894699 1039440 api_server.go:72] duration metric: took 3.08960429s to wait for apiserver process to appear ...
	I0729 14:43:37.894730 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:43:37.894767 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:43:37.899710 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:43:37.900733 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:43:37.900757 1039440 api_server.go:131] duration metric: took 6.019707ms to wait for apiserver health ...
	I0729 14:43:37.900765 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:43:38.083157 1039440 system_pods.go:59] 9 kube-system pods found
	I0729 14:43:38.083197 1039440 system_pods.go:61] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.083204 1039440 system_pods.go:61] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.083210 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.083215 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.083221 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.083226 1039440 system_pods.go:61] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.083231 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.083240 1039440 system_pods.go:61] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.083246 1039440 system_pods.go:61] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.083255 1039440 system_pods.go:74] duration metric: took 182.484884ms to wait for pod list to return data ...
	I0729 14:43:38.083269 1039440 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:43:38.277387 1039440 default_sa.go:45] found service account: "default"
	I0729 14:43:38.277418 1039440 default_sa.go:55] duration metric: took 194.142035ms for default service account to be created ...
	I0729 14:43:38.277429 1039440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:43:38.481158 1039440 system_pods.go:86] 9 kube-system pods found
	I0729 14:43:38.481194 1039440 system_pods.go:89] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.481202 1039440 system_pods.go:89] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.481210 1039440 system_pods.go:89] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.481217 1039440 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.481225 1039440 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.481230 1039440 system_pods.go:89] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.481236 1039440 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.481248 1039440 system_pods.go:89] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.481255 1039440 system_pods.go:89] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.481267 1039440 system_pods.go:126] duration metric: took 203.830126ms to wait for k8s-apps to be running ...
	I0729 14:43:38.481280 1039440 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:43:38.481329 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:38.496175 1039440 system_svc.go:56] duration metric: took 14.88714ms WaitForService to wait for kubelet
	I0729 14:43:38.496209 1039440 kubeadm.go:582] duration metric: took 3.691120463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:43:38.496237 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:43:38.677820 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:43:38.677847 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:43:38.677859 1039440 node_conditions.go:105] duration metric: took 181.616437ms to run NodePressure ...
	I0729 14:43:38.677874 1039440 start.go:241] waiting for startup goroutines ...
	I0729 14:43:38.677882 1039440 start.go:246] waiting for cluster config update ...
	I0729 14:43:38.677894 1039440 start.go:255] writing updated cluster config ...
	I0729 14:43:38.678166 1039440 ssh_runner.go:195] Run: rm -f paused
	I0729 14:43:38.728616 1039440 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:43:38.730494 1039440 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751306" cluster and "default" namespace by default
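	[editor's note] The startup sequence above completes once the apiserver's /healthz endpoint returns 200 with body "ok" (the check against https://192.168.72.233:8444/healthz a few lines earlier). As a rough illustration of that kind of probe only, not minikube's actual api_server.go code, a minimal Go sketch could look like the following; the URL, port 8444 and the idea of a bounded wait are taken from the log, while skipping TLS verification is an assumption standing in for how a real client would trust the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 with body "ok", or the deadline passes. Illustrative sketch only.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster serves a self-signed certificate; a production
			// client would load the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Address and port as reported in the log for default-k8s-diff-port-751306.
		if err := waitForHealthz("https://192.168.72.233:8444/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}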
	I0729 14:43:40.128244 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:40.128447 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:55.945251 1038758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.277690166s)
	I0729 14:43:55.945335 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:55.960870 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:55.971175 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:55.981424 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:55.981456 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:55.981512 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:43:55.992098 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:55.992165 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:56.002242 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:43:56.011416 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:56.011486 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:56.020848 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.030219 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:56.030280 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.039957 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:43:56.049607 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:56.049670 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:56.059413 1038758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:56.109453 1038758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 14:43:56.109563 1038758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:56.230876 1038758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:56.231018 1038758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:56.231126 1038758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 14:43:56.244355 1038758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:56.246461 1038758 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:56.246573 1038758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:56.246666 1038758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:56.246755 1038758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:56.246843 1038758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:56.246964 1038758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:56.247169 1038758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:56.247267 1038758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:56.247365 1038758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:56.247473 1038758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:56.247588 1038758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:56.247646 1038758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:56.247718 1038758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:56.593641 1038758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:56.714510 1038758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:56.862780 1038758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:57.010367 1038758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:57.108324 1038758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:57.109028 1038758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:57.111425 1038758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:57.113088 1038758 out.go:204]   - Booting up control plane ...
	I0729 14:43:57.113217 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:57.113336 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:57.113501 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:57.135168 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:57.141915 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:57.142022 1038758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:57.269947 1038758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:57.270056 1038758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:57.772110 1038758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.03343ms
	I0729 14:43:57.772229 1038758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:44:02.773898 1038758 kubeadm.go:310] [api-check] The API server is healthy after 5.00168383s
	I0729 14:44:02.788629 1038758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:44:02.805813 1038758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:44:02.831687 1038758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:44:02.831963 1038758 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-603534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:44:02.842427 1038758 kubeadm.go:310] [bootstrap-token] Using token: hg3j3v.551bb9ju0g9ic9e6
	I0729 14:44:00.129004 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:00.129267 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:02.844018 1038758 out.go:204]   - Configuring RBAC rules ...
	I0729 14:44:02.844160 1038758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:44:02.851693 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:44:02.859496 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:44:02.863556 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:44:02.866896 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:44:02.871375 1038758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:44:03.181687 1038758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:44:03.618445 1038758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:44:04.184562 1038758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:44:04.185548 1038758 kubeadm.go:310] 
	I0729 14:44:04.185655 1038758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:44:04.185689 1038758 kubeadm.go:310] 
	I0729 14:44:04.185788 1038758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:44:04.185801 1038758 kubeadm.go:310] 
	I0729 14:44:04.185825 1038758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:44:04.185906 1038758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:44:04.185983 1038758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:44:04.185992 1038758 kubeadm.go:310] 
	I0729 14:44:04.186079 1038758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:44:04.186090 1038758 kubeadm.go:310] 
	I0729 14:44:04.186155 1038758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:44:04.186165 1038758 kubeadm.go:310] 
	I0729 14:44:04.186231 1038758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:44:04.186337 1038758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:44:04.186431 1038758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:44:04.186441 1038758 kubeadm.go:310] 
	I0729 14:44:04.186575 1038758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:44:04.186679 1038758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:44:04.186689 1038758 kubeadm.go:310] 
	I0729 14:44:04.186810 1038758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.186944 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:44:04.186974 1038758 kubeadm.go:310] 	--control-plane 
	I0729 14:44:04.186984 1038758 kubeadm.go:310] 
	I0729 14:44:04.187102 1038758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:44:04.187111 1038758 kubeadm.go:310] 
	I0729 14:44:04.187224 1038758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.187375 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:44:04.188377 1038758 kubeadm.go:310] W0729 14:43:56.090027    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188711 1038758 kubeadm.go:310] W0729 14:43:56.090887    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188834 1038758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:04.188852 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:44:04.188863 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:44:04.190535 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:44:04.191948 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:44:04.203414 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:44:04.223025 1038758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:44:04.223114 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.223132 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603534 minikube.k8s.io/updated_at=2024_07_29T14_44_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=no-preload-603534 minikube.k8s.io/primary=true
	I0729 14:44:04.240353 1038758 ops.go:34] apiserver oom_adj: -16
	I0729 14:44:04.442077 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.942458 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.442843 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.942138 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.442232 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.942611 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.442939 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.942661 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.443044 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.522590 1038758 kubeadm.go:1113] duration metric: took 4.299548803s to wait for elevateKubeSystemPrivileges
	I0729 14:44:08.522633 1038758 kubeadm.go:394] duration metric: took 5m0.491164642s to StartCluster
	I0729 14:44:08.522657 1038758 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.522755 1038758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:44:08.524573 1038758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.524893 1038758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:44:08.524999 1038758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:44:08.525112 1038758 addons.go:69] Setting storage-provisioner=true in profile "no-preload-603534"
	I0729 14:44:08.525150 1038758 addons.go:234] Setting addon storage-provisioner=true in "no-preload-603534"
	I0729 14:44:08.525146 1038758 addons.go:69] Setting default-storageclass=true in profile "no-preload-603534"
	I0729 14:44:08.525155 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:44:08.525167 1038758 addons.go:69] Setting metrics-server=true in profile "no-preload-603534"
	I0729 14:44:08.525182 1038758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603534"
	W0729 14:44:08.525162 1038758 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:44:08.525229 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525185 1038758 addons.go:234] Setting addon metrics-server=true in "no-preload-603534"
	W0729 14:44:08.525264 1038758 addons.go:243] addon metrics-server should already be in state true
	I0729 14:44:08.525294 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525510 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525553 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525652 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525668 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525688 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525715 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.526581 1038758 out.go:177] * Verifying Kubernetes components...
	I0729 14:44:08.527919 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:44:08.541874 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0729 14:44:08.542126 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0729 14:44:08.542251 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0729 14:44:08.542397 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542505 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542664 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542948 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.542969 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543075 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543090 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543115 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543127 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543323 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543546 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543551 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543758 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.543779 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544014 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.544035 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544149 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.548026 1038758 addons.go:234] Setting addon default-storageclass=true in "no-preload-603534"
	W0729 14:44:08.548048 1038758 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:44:08.548079 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.548457 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.548478 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.559699 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0729 14:44:08.560297 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.560916 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.560953 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.561332 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.561519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.563422 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.564073 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0729 14:44:08.564524 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.565011 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.565038 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.565427 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.565592 1038758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:44:08.565752 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.566901 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:44:08.566921 1038758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:44:08.566941 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.567688 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.568067 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I0729 14:44:08.568443 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.569019 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.569040 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.569462 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.569583 1038758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:44:08.570038 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.570074 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.571187 1038758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.571204 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:44:08.571223 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.571595 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572203 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.572247 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572506 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.572704 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.572893 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.573100 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.574551 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.574900 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.574919 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.575074 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.575286 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.575427 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.575551 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.585902 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0729 14:44:08.586319 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.586778 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.586803 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.587135 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.587357 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.588606 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.588827 1038758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.588844 1038758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:44:08.588861 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.591169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591434 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.591466 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591600 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.591766 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.591873 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.592103 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.752015 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:44:08.775498 1038758 node_ready.go:35] waiting up to 6m0s for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788547 1038758 node_ready.go:49] node "no-preload-603534" has status "Ready":"True"
	I0729 14:44:08.788572 1038758 node_ready.go:38] duration metric: took 13.040411ms for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788582 1038758 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:08.793475 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:08.861468 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.869542 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:44:08.869567 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:44:08.898398 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.911120 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:44:08.911148 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:44:08.931151 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:08.931179 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:44:08.976093 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
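	[editor's note] The addon flow above first copies the manifests onto the node (the scp lines) and then applies them with the node's bundled kubectl, with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. Below is a hedged sketch of driving the same kind of apply from Go; it runs kubectl locally via os/exec rather than over SSH as minikube does, and the binary name and paths are assumptions mirroring the command shown in the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifests shells out to kubectl to apply a set of manifest files,
	// mirroring the "kubectl apply -f ..." calls recorded above. Paths are
	// placeholders; minikube invokes the node-local kubectl over SSH instead.
	func applyManifests(kubeconfig string, manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
		return nil
	}

	func main() {
		err := applyManifests("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		)
		if err != nil {
			panic(err)
		}
	}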
	I0729 14:44:09.449857 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449885 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.449863 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449958 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450343 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450354 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450361 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450373 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450374 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450389 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450442 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450455 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450476 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450487 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450620 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450635 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450637 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450779 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450799 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.493934 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.493959 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.494303 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.494320 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.494342 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.706038 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706072 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.706366 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.706382 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.706391 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706398 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.707956 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.707958 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.707986 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.708015 1038758 addons.go:475] Verifying addon metrics-server=true in "no-preload-603534"
	I0729 14:44:09.709729 1038758 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:44:09.711283 1038758 addons.go:510] duration metric: took 1.186289164s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:44:10.807976 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:13.300325 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:15.800886 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.300042 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.800080 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.800111 1038758 pod_ready.go:81] duration metric: took 10.006613711s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.800124 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804949 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.804974 1038758 pod_ready.go:81] duration metric: took 4.840477ms for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804985 1038758 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810160 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.810176 1038758 pod_ready.go:81] duration metric: took 5.184516ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810185 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814785 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.814807 1038758 pod_ready.go:81] duration metric: took 4.615516ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814819 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819023 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.819044 1038758 pod_ready.go:81] duration metric: took 4.215656ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819056 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198226 1038758 pod_ready.go:92] pod "kube-proxy-7mr4z" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.198252 1038758 pod_ready.go:81] duration metric: took 379.18928ms for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198265 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598783 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.598824 1038758 pod_ready.go:81] duration metric: took 400.55255ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598835 1038758 pod_ready.go:38] duration metric: took 10.810240266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:19.598865 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:44:19.598931 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:44:19.615165 1038758 api_server.go:72] duration metric: took 11.090236578s to wait for apiserver process to appear ...
	I0729 14:44:19.615190 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:44:19.615211 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:44:19.619574 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:44:19.620586 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:44:19.620610 1038758 api_server.go:131] duration metric: took 5.412598ms to wait for apiserver health ...
	I0729 14:44:19.620620 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:44:19.802376 1038758 system_pods.go:59] 9 kube-system pods found
	I0729 14:44:19.802408 1038758 system_pods.go:61] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:19.802415 1038758 system_pods.go:61] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:19.802420 1038758 system_pods.go:61] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:19.802429 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:19.802434 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:19.802441 1038758 system_pods.go:61] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:19.802446 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:19.802454 1038758 system_pods.go:61] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:19.802470 1038758 system_pods.go:61] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:19.802482 1038758 system_pods.go:74] duration metric: took 181.853357ms to wait for pod list to return data ...
	I0729 14:44:19.802491 1038758 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:44:19.998312 1038758 default_sa.go:45] found service account: "default"
	I0729 14:44:19.998348 1038758 default_sa.go:55] duration metric: took 195.845187ms for default service account to be created ...
	I0729 14:44:19.998361 1038758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:44:20.201742 1038758 system_pods.go:86] 9 kube-system pods found
	I0729 14:44:20.201778 1038758 system_pods.go:89] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:20.201787 1038758 system_pods.go:89] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:20.201793 1038758 system_pods.go:89] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:20.201800 1038758 system_pods.go:89] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:20.201807 1038758 system_pods.go:89] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:20.201812 1038758 system_pods.go:89] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:20.201818 1038758 system_pods.go:89] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:20.201826 1038758 system_pods.go:89] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:20.201835 1038758 system_pods.go:89] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:20.201850 1038758 system_pods.go:126] duration metric: took 203.481528ms to wait for k8s-apps to be running ...
	I0729 14:44:20.201860 1038758 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:44:20.201914 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:20.217416 1038758 system_svc.go:56] duration metric: took 15.543768ms WaitForService to wait for kubelet
	I0729 14:44:20.217445 1038758 kubeadm.go:582] duration metric: took 11.692521258s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:44:20.217464 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:44:20.398667 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:44:20.398696 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:44:20.398708 1038758 node_conditions.go:105] duration metric: took 181.238886ms to run NodePressure ...
	I0729 14:44:20.398720 1038758 start.go:241] waiting for startup goroutines ...
	I0729 14:44:20.398727 1038758 start.go:246] waiting for cluster config update ...
	I0729 14:44:20.398738 1038758 start.go:255] writing updated cluster config ...
	I0729 14:44:20.399014 1038758 ssh_runner.go:195] Run: rm -f paused
	I0729 14:44:20.452187 1038758 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 14:44:20.454434 1038758 out.go:177] * Done! kubectl is now configured to use "no-preload-603534" cluster and "default" namespace by default
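	The block above is the post-start gate for the "no-preload-603534" profile: per-pod Ready checks for the control-plane components, an apiserver /healthz probe, a default service-account check, and a kubelet service check. A rough manual equivalent of those probes, assuming kubectl and minikube are run on the same host as this job (this is a sketch, not output captured in this report):

	    kubectl --context no-preload-603534 -n kube-system get pods
	    curl -k https://192.168.61.116:8443/healthz
	    minikube -p no-preload-603534 ssh -- sudo systemctl is-active kubelet
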
	I0729 14:44:40.130597 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:40.130831 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:40.130848 1039759 kubeadm.go:310] 
	I0729 14:44:40.130903 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:44:40.130956 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:44:40.130966 1039759 kubeadm.go:310] 
	I0729 14:44:40.131032 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:44:40.131110 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:44:40.131256 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:44:40.131270 1039759 kubeadm.go:310] 
	I0729 14:44:40.131450 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:44:40.131499 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:44:40.131542 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:44:40.131552 1039759 kubeadm.go:310] 
	I0729 14:44:40.131686 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:44:40.131795 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:44:40.131806 1039759 kubeadm.go:310] 
	I0729 14:44:40.131947 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:44:40.132064 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:44:40.132162 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:44:40.132254 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:44:40.132264 1039759 kubeadm.go:310] 
	I0729 14:44:40.133208 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:40.133363 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:44:40.133468 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 14:44:40.133610 1039759 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 14:44:40.133676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:44:40.607039 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:40.623771 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:44:40.636278 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:44:40.636310 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:44:40.636371 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:44:40.647768 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:44:40.647827 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:44:40.658281 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:44:40.668393 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:44:40.668477 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:44:40.678521 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.687891 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:44:40.687960 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.698384 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:44:40.708965 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:44:40.709047 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:44:40.719665 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:44:40.796786 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:44:40.796883 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:44:40.946106 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:44:40.946258 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:44:40.946388 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:44:41.140483 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:44:41.142390 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:44:41.142503 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:44:41.142610 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:44:41.142722 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:44:41.142811 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:44:41.142910 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:44:41.142995 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:44:41.143092 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:44:41.143180 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:44:41.143279 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:44:41.143390 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:44:41.143445 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:44:41.143524 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:44:41.188854 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:44:41.329957 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:44:41.968599 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:44:42.034788 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:44:42.055543 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:44:42.056622 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:44:42.056715 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:44:42.204165 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:44:42.205935 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:44:42.206076 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:44:42.216259 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:44:42.217947 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:44:42.219361 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:44:42.221672 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:45:22.223830 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:45:22.223940 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:22.224139 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:27.224303 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:27.224574 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:37.224905 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:37.225090 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:57.226285 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:57.226533 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227279 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:46:37.227485 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227494 1039759 kubeadm.go:310] 
	I0729 14:46:37.227528 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:46:37.227605 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:46:37.227627 1039759 kubeadm.go:310] 
	I0729 14:46:37.227683 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:46:37.227732 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:46:37.227861 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:46:37.227870 1039759 kubeadm.go:310] 
	I0729 14:46:37.228011 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:46:37.228093 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:46:37.228140 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:46:37.228173 1039759 kubeadm.go:310] 
	I0729 14:46:37.228310 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:46:37.228443 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:46:37.228454 1039759 kubeadm.go:310] 
	I0729 14:46:37.228606 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:46:37.228714 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:46:37.228821 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:46:37.228913 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:46:37.228934 1039759 kubeadm.go:310] 
	I0729 14:46:37.229926 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:46:37.230070 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:46:37.230175 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:46:37.230284 1039759 kubeadm.go:394] duration metric: took 7m57.608522587s to StartCluster
	I0729 14:46:37.230347 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:46:37.230435 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:46:37.276238 1039759 cri.go:89] found id: ""
	I0729 14:46:37.276294 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.276304 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:46:37.276317 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:46:37.276439 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:46:37.309934 1039759 cri.go:89] found id: ""
	I0729 14:46:37.309960 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.309969 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:46:37.309975 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:46:37.310031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:46:37.343286 1039759 cri.go:89] found id: ""
	I0729 14:46:37.343312 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.343320 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:46:37.343325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:46:37.343384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:46:37.378735 1039759 cri.go:89] found id: ""
	I0729 14:46:37.378763 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.378773 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:46:37.378779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:46:37.378834 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:46:37.414244 1039759 cri.go:89] found id: ""
	I0729 14:46:37.414275 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.414284 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:46:37.414290 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:46:37.414372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:46:37.453809 1039759 cri.go:89] found id: ""
	I0729 14:46:37.453842 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.453858 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:46:37.453866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:46:37.453955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:46:37.492250 1039759 cri.go:89] found id: ""
	I0729 14:46:37.492279 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.492288 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:46:37.492294 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:46:37.492360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:46:37.554342 1039759 cri.go:89] found id: ""
	I0729 14:46:37.554377 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.554388 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:46:37.554404 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:46:37.554422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:46:37.631118 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:46:37.631165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:46:37.650991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:46:37.651047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:46:37.731852 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:46:37.731880 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:46:37.731897 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:46:37.849049 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:46:37.849092 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 14:46:37.893957 1039759 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:46:37.894031 1039759 out.go:239] * 
	W0729 14:46:37.894120 1039759 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.894150 1039759 out.go:239] * 
	W0729 14:46:37.895278 1039759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:46:37.898735 1039759 out.go:177] 
	W0729 14:46:37.900049 1039759 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.900115 1039759 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:46:37.900146 1039759 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 14:46:37.901531 1039759 out.go:177] 
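	The failure above ends with minikube's own hint: check the kubelet journal and retry with an explicit kubelet cgroup driver. A minimal follow-up sketch based only on that suggestion (the profile name is a placeholder; substitute the profile used by the failing test):

	    # on the node: why did the kubelet never answer on :10248?
	    journalctl -xeu kubelet
	    # list any control-plane containers cri-o did manage to start
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # retry with the cgroup-driver override suggested in the log
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd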
	
	
	==> CRI-O <==
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.561189358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7340d05c-518c-4bfb-b7e0-66a04644903f name=/runtime.v1.RuntimeService/Version
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.562494256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd73d566-c5a2-4745-b5e6-24d5856d4d7f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.563007736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264802562984157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd73d566-c5a2-4745-b5e6-24d5856d4d7f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.563638304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cc9c7f7-9c0d-4d20-b6bf-573a04de9328 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.563687975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cc9c7f7-9c0d-4d20-b6bf-573a04de9328 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.564191745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae,PodSandboxId:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722264250913736881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332,PodSandboxId:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250848035358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf,PodSandboxId:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250717019254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6
-5948239fbe22,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6,PodSandboxId:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172226425050
7359919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34,PodSandboxId:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722264238363781983,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138,PodSandboxId:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172226423836831
8347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4,PodSandboxId:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722264238310299470,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005,PodSandboxId:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722264238275790509,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581,PodSandboxId:440219b39f43524fc5f0d664323b1a1731b2d76ea2d0e0fe114483030fb9cc7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263950405226839,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cc9c7f7-9c0d-4d20-b6bf-573a04de9328 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.601103074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5d5151f-2c95-4624-828e-1fbca4d07c4a name=/runtime.v1.RuntimeService/Version
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.601174703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5d5151f-2c95-4624-828e-1fbca4d07c4a name=/runtime.v1.RuntimeService/Version
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.602253289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6f5aecc-23ae-4631-938f-713d92683859 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.602647504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264802602621160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6f5aecc-23ae-4631-938f-713d92683859 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.603235606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc30f684-c41a-4998-ac9e-e05ebfa705be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.603308233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc30f684-c41a-4998-ac9e-e05ebfa705be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.603493972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae,PodSandboxId:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722264250913736881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332,PodSandboxId:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250848035358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf,PodSandboxId:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250717019254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6
-5948239fbe22,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6,PodSandboxId:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172226425050
7359919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34,PodSandboxId:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722264238363781983,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138,PodSandboxId:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172226423836831
8347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4,PodSandboxId:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722264238310299470,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005,PodSandboxId:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722264238275790509,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581,PodSandboxId:440219b39f43524fc5f0d664323b1a1731b2d76ea2d0e0fe114483030fb9cc7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263950405226839,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc30f684-c41a-4998-ac9e-e05ebfa705be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.632866325Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f418fcd8-720d-4c0f-87ae-60305486dfe4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.633106832Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&PodSandboxMetadata{Name:kube-proxy-7mr4z,Uid:17de173c-2b95-4b35-a9d7-b38f065270cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250505955076,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:44:08.662585954Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bff127fc3c7af36fbd54e60957ad04d46f49d1c07604e328225a54407e31f00e,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-852x6,Uid:637fea9b-2924-4593-a4e2-99a
33ab613d2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250492197895,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-852x6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 637fea9b-2924-4593-a4e2-99a33ab613d2,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:44:09.580705386Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7336eb38-d53d-4456-8367-cf843abe5cb5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250356160994,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d
53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T14:44:09.446644688Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-m6q8r,Uid
:b3a0c38d-1587-4fdf-b2e6-58d364ca400b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250306365400,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:44:09.385876018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-vn8z4,Uid:4654aadf-7870-46b6-96e6-5948239fbe22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250284358121,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6-5948239fbe22,k8s-app: kube-dns,pod-templat
e-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:44:09.375056851Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-603534,Uid:a507bf021d9946f3c35a7a86fe923cbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264238148317681,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a507bf021d9946f3c35a7a86fe923cbf,kubernetes.io/config.seen: 2024-07-29T14:43:57.668201474Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&PodSandboxMeta
data{Name:kube-apiserver-no-preload-603534,Uid:030bfbd8969aea4a7e101617f158291c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722264238147354738,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.116:8443,kubernetes.io/config.hash: 030bfbd8969aea4a7e101617f158291c,kubernetes.io/config.seen: 2024-07-29T14:43:57.668200510Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-603534,Uid:8fcccc7872085cdb6b3a955d71b243a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264238125946141,Labels:map[string]string{component: etcd,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.116:2379,kubernetes.io/config.hash: 8fcccc7872085cdb6b3a955d71b243a1,kubernetes.io/config.seen: 2024-07-29T14:43:57.668198871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-603534,Uid:1f72a6b058d2718cb66f4eeea0a3654f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264238108098872,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: 1f72a6b058d2718cb66f4eeea0a3654f,kubernetes.io/config.seen: 2024-07-29T14:43:57.668195631Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:440219b39f43524fc5f0d664323b1a1731b2d76ea2d0e0fe114483030fb9cc7e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-603534,Uid:030bfbd8969aea4a7e101617f158291c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722263950210856404,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.116:8443,kubernetes.io/config.hash: 030bfbd8969aea4a7e101617f158291c,kubernetes.io/config.seen: 2024-07-29T14:39:09.733640371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=f418fcd8-720d-4c0f-87ae-60305486dfe4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.633895324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2a5b2ae-0cc5-4d77-9b8d-5e51f8d65816 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.633961911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2a5b2ae-0cc5-4d77-9b8d-5e51f8d65816 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.634170282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae,PodSandboxId:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722264250913736881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332,PodSandboxId:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250848035358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf,PodSandboxId:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250717019254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6
-5948239fbe22,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6,PodSandboxId:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172226425050
7359919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34,PodSandboxId:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722264238363781983,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138,PodSandboxId:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172226423836831
8347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4,PodSandboxId:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722264238310299470,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005,PodSandboxId:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722264238275790509,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581,PodSandboxId:440219b39f43524fc5f0d664323b1a1731b2d76ea2d0e0fe114483030fb9cc7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263950405226839,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2a5b2ae-0cc5-4d77-9b8d-5e51f8d65816 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.647078324Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc6e495e-8dcb-46a7-aed4-4321566f1e97 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.647171836Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc6e495e-8dcb-46a7-aed4-4321566f1e97 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.648045866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9aa5d619-102e-4cfd-9d15-7ffec7444646 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.648358487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264802648340229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9aa5d619-102e-4cfd-9d15-7ffec7444646 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.648897463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77493331-6af2-4791-813f-a86311a33a25 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.648942629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77493331-6af2-4791-813f-a86311a33a25 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:53:22 no-preload-603534 crio[704]: time="2024-07-29 14:53:22.649107521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae,PodSandboxId:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722264250913736881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332,PodSandboxId:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250848035358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf,PodSandboxId:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250717019254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6
-5948239fbe22,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6,PodSandboxId:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172226425050
7359919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34,PodSandboxId:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722264238363781983,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138,PodSandboxId:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172226423836831
8347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4,PodSandboxId:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722264238310299470,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005,PodSandboxId:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722264238275790509,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581,PodSandboxId:440219b39f43524fc5f0d664323b1a1731b2d76ea2d0e0fe114483030fb9cc7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263950405226839,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77493331-6af2-4791-813f-a86311a33a25 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	59102fc127ead       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   826d5900986dc       kube-proxy-7mr4z
	1c34642e35aaf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   00f7003edf93a       coredns-5cfdc65f69-m6q8r
	f9d4e39be60a2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   50ed731e85ec5       coredns-5cfdc65f69-vn8z4
	bbb33b2f8ba13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   03b9c68038f37       storage-provisioner
	1a09ba6b389e6       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   a0eb60c303bfd       kube-apiserver-no-preload-603534
	350ebe7aa8d4e       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   e4c5c23be7a6e       kube-controller-manager-no-preload-603534
	a3df0b9137680       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   f26c144d676e7       etcd-no-preload-603534
	8c99c444a37ed       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   f9ec319d1da35       kube-scheduler-no-preload-603534
	44890ba7dc13d       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   440219b39f435       kube-apiserver-no-preload-603534
	
	
	==> coredns [1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-603534
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-603534
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=no-preload-603534
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T14_44_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:44:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-603534
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:53:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:49:18 +0000   Mon, 29 Jul 2024 14:43:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:49:18 +0000   Mon, 29 Jul 2024 14:43:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:49:18 +0000   Mon, 29 Jul 2024 14:43:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:49:18 +0000   Mon, 29 Jul 2024 14:44:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.116
	  Hostname:    no-preload-603534
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dac63dda337f45c4af568f12ef5857c7
	  System UUID:                dac63dda-337f-45c4-af56-8f12ef5857c7
	  Boot ID:                    b4dbf66e-f911-43eb-a6ce-460f01ecb2bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-m6q8r                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-5cfdc65f69-vn8z4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-no-preload-603534                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-603534             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-603534    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-7mr4z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-no-preload-603534             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-78fcd8795b-852x6              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node no-preload-603534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node no-preload-603534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node no-preload-603534 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s  node-controller  Node no-preload-603534 event: Registered Node no-preload-603534 in Controller
	
	
	==> dmesg <==
	[  +0.047878] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.280113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.682591] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600780] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.811417] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.061748] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059691] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.173086] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.150180] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.297577] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[Jul29 14:39] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.062616] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.993130] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +3.594472] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.391279] kauditd_printk_skb: 53 callbacks suppressed
	[ +10.056526] kauditd_printk_skb: 30 callbacks suppressed
	[Jul29 14:43] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.252677] systemd-fstab-generator[2932]: Ignoring "noauto" option for root device
	[Jul29 14:44] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.482864] systemd-fstab-generator[3253]: Ignoring "noauto" option for root device
	[  +5.396867] systemd-fstab-generator[3386]: Ignoring "noauto" option for root device
	[  +0.113093] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.778774] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4] <==
	{"level":"info","ts":"2024-07-29T14:43:58.616936Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T14:43:58.617228Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.116:2380"}
	{"level":"info","ts":"2024-07-29T14:43:58.617449Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.116:2380"}
	{"level":"info","ts":"2024-07-29T14:43:58.623003Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3ff2c8dabfa88909","initial-advertise-peer-urls":["https://192.168.61.116:2380"],"listen-peer-urls":["https://192.168.61.116:2380"],"advertise-client-urls":["https://192.168.61.116:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.116:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T14:43:58.623051Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T14:43:58.966588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:58.966637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:58.966664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 received MsgPreVoteResp from 3ff2c8dabfa88909 at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:58.966678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:58.966683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 received MsgVoteResp from 3ff2c8dabfa88909 at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:58.966691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:58.966698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3ff2c8dabfa88909 elected leader 3ff2c8dabfa88909 at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:58.970818Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3ff2c8dabfa88909","local-member-attributes":"{Name:no-preload-603534 ClientURLs:[https://192.168.61.116:2379]}","request-path":"/0/members/3ff2c8dabfa88909/attributes","cluster-id":"d8013dd48c9fa2cd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:43:58.970876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:43:58.971331Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:58.976141Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T14:43:58.979215Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T14:43:58.979326Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d8013dd48c9fa2cd","local-member-id":"3ff2c8dabfa88909","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:58.979404Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:58.979422Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:58.979695Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:43:58.993042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:43:58.993081Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:43:58.993728Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T14:43:58.994394Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.116:2379"}
	
	
	==> kernel <==
	 14:53:23 up 14 min,  0 users,  load average: 0.20, 0.21, 0.13
	Linux no-preload-603534 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138] <==
	W0729 14:49:01.834139       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:49:01.834445       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 14:49:01.835484       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 14:49:01.835517       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:50:01.836422       1 handler_proxy.go:99] no RequestInfo found in the context
	W0729 14:50:01.836427       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:50:01.836732       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0729 14:50:01.836817       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 14:50:01.837889       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 14:50:01.837911       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:52:01.838412       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:52:01.838648       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 14:52:01.838723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:52:01.838762       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 14:52:01.840675       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 14:52:01.840785       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581] <==
	W0729 14:43:50.576792       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.578289       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.602780       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.647047       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.657458       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.671063       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.725113       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.727665       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.745499       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.801221       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.807707       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.839184       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.849979       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.857993       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.862687       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.889843       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.955171       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:51.045056       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:51.086354       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:51.255212       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:51.382109       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:54.737595       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:55.189506       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:55.295155       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:55.301737       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34] <==
	E0729 14:48:08.813041       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:48:08.843239       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:48:38.819081       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:48:38.851247       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:49:08.826864       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:49:08.859931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:49:18.705987       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-603534"
	E0729 14:49:38.833899       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:49:38.868612       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:49:52.517104       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="283.467µs"
	I0729 14:50:07.520209       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="118.532µs"
	E0729 14:50:08.840827       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:50:08.876300       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:50:38.846831       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:50:38.884116       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:51:08.854849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:51:08.892207       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:51:38.861710       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:51:38.900396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:52:08.869176       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:52:08.908231       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:52:38.875501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:52:38.915397       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:53:08.882329       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:53:08.923271       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 14:44:11.309249       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 14:44:11.320203       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.116"]
	E0729 14:44:11.320330       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 14:44:11.358497       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 14:44:11.358612       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:44:11.358645       1 server_linux.go:170] "Using iptables Proxier"
	I0729 14:44:11.362226       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 14:44:11.362525       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 14:44:11.362714       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:44:11.364192       1 config.go:197] "Starting service config controller"
	I0729 14:44:11.364234       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:44:11.364269       1 config.go:104] "Starting endpoint slice config controller"
	I0729 14:44:11.364285       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:44:11.365821       1 config.go:326] "Starting node config controller"
	I0729 14:44:11.365919       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:44:11.465397       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 14:44:11.465518       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:44:11.466224       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005] <==
	W0729 14:44:00.886454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 14:44:00.895246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.772476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 14:44:01.773940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.839906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 14:44:01.840221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.872078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 14:44:01.872163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.924395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 14:44:01.924448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.948591       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 14:44:01.948633       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0729 14:44:01.976852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 14:44:01.976911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.044492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 14:44:02.044785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.087444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 14:44:02.087585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.096509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 14:44:02.096656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.122914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 14:44:02.123406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.176934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 14:44:02.176985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0729 14:44:05.165124       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 14:51:03 no-preload-603534 kubelet[3260]: E0729 14:51:03.517923    3260 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:51:03 no-preload-603534 kubelet[3260]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:51:03 no-preload-603534 kubelet[3260]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:51:03 no-preload-603534 kubelet[3260]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:51:03 no-preload-603534 kubelet[3260]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:51:16 no-preload-603534 kubelet[3260]: E0729 14:51:16.498749    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:51:29 no-preload-603534 kubelet[3260]: E0729 14:51:29.500263    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:51:40 no-preload-603534 kubelet[3260]: E0729 14:51:40.498883    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:51:53 no-preload-603534 kubelet[3260]: E0729 14:51:53.499771    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:52:03 no-preload-603534 kubelet[3260]: E0729 14:52:03.517591    3260 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:52:03 no-preload-603534 kubelet[3260]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:52:03 no-preload-603534 kubelet[3260]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:52:03 no-preload-603534 kubelet[3260]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:52:03 no-preload-603534 kubelet[3260]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:52:06 no-preload-603534 kubelet[3260]: E0729 14:52:06.499468    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:52:21 no-preload-603534 kubelet[3260]: E0729 14:52:21.500265    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:52:35 no-preload-603534 kubelet[3260]: E0729 14:52:35.500835    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:52:46 no-preload-603534 kubelet[3260]: E0729 14:52:46.500338    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:52:58 no-preload-603534 kubelet[3260]: E0729 14:52:58.499964    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:53:03 no-preload-603534 kubelet[3260]: E0729 14:53:03.516914    3260 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:53:03 no-preload-603534 kubelet[3260]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:53:03 no-preload-603534 kubelet[3260]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:53:03 no-preload-603534 kubelet[3260]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:53:03 no-preload-603534 kubelet[3260]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:53:13 no-preload-603534 kubelet[3260]: E0729 14:53:13.500396    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	
	
	==> storage-provisioner [bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6] <==
	I0729 14:44:10.719049       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 14:44:10.815829       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 14:44:10.815937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 14:44:10.840434       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 14:44:10.842964       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-603534_f855ee1d-7237-41ad-b4c5-a39692277466!
	I0729 14:44:10.844237       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be182905-5f1a-4b11-a1af-a0aaaa08f016", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-603534_f855ee1d-7237-41ad-b4c5-a39692277466 became leader
	I0729 14:44:10.943739       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-603534_f855ee1d-7237-41ad-b4c5-a39692277466!
	

                                                
                                                
-- /stdout --
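The captured logs above are consistent across this failure group: the kubelet keeps backing off pulling the metrics-server image from fake.domain/registry.k8s.io/echoserver:1.4 (ImagePullBackOff), so the v1beta1.metrics.k8s.io APIService never becomes available and the apiserver and controller-manager repeatedly report 503s and stale GroupVersion discovery for metrics.k8s.io/v1beta1. As a minimal manual triage sketch, separate from the test harness and assuming the addon runs a Deployment named metrics-server in kube-system (as the ReplicaSet name above suggests), the aggregated API and the deployment could be inspected with:

	# illustrative only; same kubeconfig context used by the harness
	kubectl --context no-preload-603534 get apiservices v1beta1.metrics.k8s.io -o wide
	kubectl --context no-preload-603534 -n kube-system describe deployment metrics-server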
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603534 -n no-preload-603534
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-603534 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-852x6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-603534 describe pod metrics-server-78fcd8795b-852x6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-603534 describe pod metrics-server-78fcd8795b-852x6: exit status 1 (66.298474ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-852x6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-603534 describe pod metrics-server-78fcd8795b-852x6: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.25s)
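For reference, the post-mortem sequence above can be replayed by hand using only the commands already shown in this report:

	# check apiserver status, list non-running pods, then describe the offender
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603534 -n no-preload-603534
	kubectl --context no-preload-603534 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	kubectl --context no-preload-603534 describe pod metrics-server-78fcd8795b-852x6

The final describe exits non-zero with NotFound, presumably because the metrics-server pod listed a moment earlier was deleted or recreated between the pod listing and the describe call.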

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:46:54.641532  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:47:06.666166  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:47:22.664402  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:47:33.204482  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:47:33.714019  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:47:59.775607  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:48:08.882245  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:48:45.707068  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:48:55.651706  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:49:00.040801  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:49:22.821211  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:49:30.662637  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:49:31.927060  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:50:18.696970  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:50:23.086703  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:50:31.595837  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:51:10.159159  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:52:06.665632  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:52:22.664039  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:52:59.776221  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
(same WARNING repeated 9 times)
E0729 14:53:08.882457  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
(same WARNING repeated 47 times)
E0729 14:53:55.652359  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
(same WARNING repeated 4 times)
E0729 14:54:00.040553  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
(same WARNING repeated 31 times)
E0729 14:54:30.662567  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
(same WARNING repeated 39 times)
E0729 14:55:09.719939  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
(same WARNING repeated 22 times)
E0729 14:55:31.596141  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
[the same warning was emitted 9 more times before the client rate limiter gave up]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 2 (229.706995ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-360866" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
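For context on the failure above: the harness keeps listing pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace until one is Running or the 9m budget runs out, and here every poll fails because the apiserver at 192.168.39.71:8443 refuses connections. A minimal client-go sketch of that kind of wait loop (an illustration only, not the actual helpers_test.go code; the kubeconfig path and the 10-second poll interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption taken from this run's environment; adjust locally.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19338-974764/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 9m budget the harness reports before "context deadline exceeded".
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		switch {
		case err != nil:
			// With the apiserver stopped, this is where "connection refused" surfaces.
			fmt.Println("WARNING:", err)
		case len(pods.Items) > 0 && pods.Items[0].Status.Phase == "Running":
			fmt.Println("dashboard pod is running")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("giving up:", ctx.Err())
			return
		case <-time.After(10 * time.Second): // assumed poll interval
		}
	}
}

With the control plane down, every List call returns the connection-refused error seen in the warnings above, so a loop of this shape can only exit via the context deadline, which matches the "failed to start within 9m0s" result recorded here.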
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 2 (220.742929ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-360866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-360866 logs -n 25: (1.530689468s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-513289 sudo cat                             | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo find                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo crio                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-513289                                      | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-054967 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | disable-driver-mounts-054967                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:34:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:34:53.874295 1039759 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:34:53.874567 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874577 1039759 out.go:304] Setting ErrFile to fd 2...
	I0729 14:34:53.874580 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874774 1039759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:34:53.875294 1039759 out.go:298] Setting JSON to false
	I0729 14:34:53.876313 1039759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15446,"bootTime":1722248248,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:34:53.876373 1039759 start.go:139] virtualization: kvm guest
	I0729 14:34:53.878446 1039759 out.go:177] * [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:34:53.879820 1039759 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:34:53.879855 1039759 notify.go:220] Checking for updates...
	I0729 14:34:53.882201 1039759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:34:53.883330 1039759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:34:53.884514 1039759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:34:53.885734 1039759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:34:53.886894 1039759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:34:53.888361 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:34:53.888789 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.888850 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.903960 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 14:34:53.904467 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.905083 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.905112 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.905449 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.905609 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.907360 1039759 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 14:34:53.908710 1039759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:34:53.909026 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.909064 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.923834 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0729 14:34:53.924300 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.924787 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.924809 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.925150 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.925352 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.960368 1039759 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:34:53.961649 1039759 start.go:297] selected driver: kvm2
	I0729 14:34:53.961662 1039759 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.961778 1039759 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:34:53.962398 1039759 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.962459 1039759 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:34:53.977941 1039759 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:34:53.978311 1039759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:34:53.978341 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:34:53.978350 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:34:53.978395 1039759 start.go:340] cluster config:
	{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.978499 1039759 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.980167 1039759 out.go:177] * Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	I0729 14:34:55.588663 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:34:53.981356 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:34:53.981390 1039759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:34:53.981400 1039759 cache.go:56] Caching tarball of preloaded images
	I0729 14:34:53.981477 1039759 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:34:53.981487 1039759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:34:53.981600 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:34:53.981775 1039759 start.go:360] acquireMachinesLock for old-k8s-version-360866: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:34:58.660730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:04.740665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:07.812781 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:13.892659 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:16.964692 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:23.044749 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:26.116761 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:32.196730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:35.268709 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:41.348712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:44.420693 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:50.500715 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:53.572717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:59.652707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:02.724722 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:08.804719 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:11.876665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:17.956684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:21.028707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:27.108667 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:30.180710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:36.260645 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:39.332717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:45.412694 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:48.484713 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:54.564703 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:57.636707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:03.716690 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:06.788660 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:12.868658 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:15.940708 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:22.020684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:25.092712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:31.172710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:34.177216 1039263 start.go:364] duration metric: took 3m42.890532077s to acquireMachinesLock for "embed-certs-668123"
	I0729 14:37:34.177291 1039263 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:34.177300 1039263 fix.go:54] fixHost starting: 
	I0729 14:37:34.177641 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:34.177673 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:34.193427 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0729 14:37:34.193879 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:34.194396 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:37:34.194421 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:34.194774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:34.194987 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:34.195156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:37:34.196597 1039263 fix.go:112] recreateIfNeeded on embed-certs-668123: state=Stopped err=<nil>
	I0729 14:37:34.196642 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	W0729 14:37:34.196802 1039263 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:34.198564 1039263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-668123" ...
	I0729 14:37:34.199926 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Start
	I0729 14:37:34.200086 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring networks are active...
	I0729 14:37:34.200833 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network default is active
	I0729 14:37:34.201159 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network mk-embed-certs-668123 is active
	I0729 14:37:34.201578 1039263 main.go:141] libmachine: (embed-certs-668123) Getting domain xml...
	I0729 14:37:34.202214 1039263 main.go:141] libmachine: (embed-certs-668123) Creating domain...
	I0729 14:37:34.510575 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting to get IP...
	I0729 14:37:34.511459 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.511909 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.512006 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.511904 1040307 retry.go:31] will retry after 294.890973ms: waiting for machine to come up
	I0729 14:37:34.808513 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.809044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.809070 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.809007 1040307 retry.go:31] will retry after 296.152247ms: waiting for machine to come up
	I0729 14:37:35.106423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.106839 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.106872 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.106773 1040307 retry.go:31] will retry after 384.830082ms: waiting for machine to come up
	I0729 14:37:35.493463 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.493902 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.493933 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.493861 1040307 retry.go:31] will retry after 490.673812ms: waiting for machine to come up
	I0729 14:37:35.986675 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.987184 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.987235 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.987099 1040307 retry.go:31] will retry after 725.022775ms: waiting for machine to come up
	I0729 14:37:34.174673 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:34.174713 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175060 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:37:34.175084 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175279 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:37:34.177042 1038758 machine.go:97] duration metric: took 4m37.39644293s to provisionDockerMachine
	I0729 14:37:34.177087 1038758 fix.go:56] duration metric: took 4m37.417815827s for fixHost
	I0729 14:37:34.177094 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 4m37.417912853s
	W0729 14:37:34.177127 1038758 start.go:714] error starting host: provision: host is not running
	W0729 14:37:34.177230 1038758 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 14:37:34.177240 1038758 start.go:729] Will try again in 5 seconds ...
	I0729 14:37:36.714078 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:36.714502 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:36.714565 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:36.714389 1040307 retry.go:31] will retry after 722.684756ms: waiting for machine to come up
	I0729 14:37:37.438316 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:37.438859 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:37.438891 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:37.438802 1040307 retry.go:31] will retry after 1.163999997s: waiting for machine to come up
	I0729 14:37:38.604109 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:38.604503 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:38.604531 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:38.604469 1040307 retry.go:31] will retry after 1.401566003s: waiting for machine to come up
	I0729 14:37:40.007310 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:40.007900 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:40.007929 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:40.007839 1040307 retry.go:31] will retry after 1.40470791s: waiting for machine to come up
	I0729 14:37:39.178982 1038758 start.go:360] acquireMachinesLock for no-preload-603534: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:37:41.414509 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:41.415018 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:41.415049 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:41.414959 1040307 retry.go:31] will retry after 2.205183048s: waiting for machine to come up
	I0729 14:37:43.623427 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:43.623894 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:43.623922 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:43.623856 1040307 retry.go:31] will retry after 2.444881913s: waiting for machine to come up
	I0729 14:37:46.070961 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:46.071314 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:46.071338 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:46.071271 1040307 retry.go:31] will retry after 3.115189863s: waiting for machine to come up
	I0729 14:37:49.187610 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:49.188107 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:49.188134 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:49.188054 1040307 retry.go:31] will retry after 3.139484284s: waiting for machine to come up
	I0729 14:37:53.653416 1039440 start.go:364] duration metric: took 3m41.12464482s to acquireMachinesLock for "default-k8s-diff-port-751306"
	I0729 14:37:53.653486 1039440 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:53.653494 1039440 fix.go:54] fixHost starting: 
	I0729 14:37:53.653880 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:53.653913 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:53.671499 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0729 14:37:53.671927 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:53.672550 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:37:53.672584 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:53.672986 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:53.673198 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:37:53.673353 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:37:53.674706 1039440 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751306: state=Stopped err=<nil>
	I0729 14:37:53.674736 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	W0729 14:37:53.674896 1039440 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:53.677098 1039440 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751306" ...
	I0729 14:37:52.329477 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.329880 1039263 main.go:141] libmachine: (embed-certs-668123) Found IP for machine: 192.168.50.53
	I0729 14:37:52.329895 1039263 main.go:141] libmachine: (embed-certs-668123) Reserving static IP address...
	I0729 14:37:52.329906 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has current primary IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.330376 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.330414 1039263 main.go:141] libmachine: (embed-certs-668123) Reserved static IP address: 192.168.50.53
	I0729 14:37:52.330433 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | skip adding static IP to network mk-embed-certs-668123 - found existing host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"}
	I0729 14:37:52.330453 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Getting to WaitForSSH function...
	I0729 14:37:52.330465 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting for SSH to be available...
	I0729 14:37:52.332510 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332794 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.332821 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332897 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH client type: external
	I0729 14:37:52.332931 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa (-rw-------)
	I0729 14:37:52.332963 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:37:52.332976 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | About to run SSH command:
	I0729 14:37:52.332989 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | exit 0
	I0729 14:37:52.456152 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | SSH cmd err, output: <nil>: 
	I0729 14:37:52.456532 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetConfigRaw
	I0729 14:37:52.457156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.459620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.459946 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.459980 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.460200 1039263 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/config.json ...
	I0729 14:37:52.460384 1039263 machine.go:94] provisionDockerMachine start ...
	I0729 14:37:52.460404 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:52.460672 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.462798 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463089 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.463119 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463260 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.463428 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463594 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463703 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.463856 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.464071 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.464080 1039263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:37:52.564925 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:37:52.564959 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565214 1039263 buildroot.go:166] provisioning hostname "embed-certs-668123"
	I0729 14:37:52.565241 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565472 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.568131 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568450 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.568482 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568615 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.568825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.568975 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.569143 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.569335 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.569511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.569522 1039263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-668123 && echo "embed-certs-668123" | sudo tee /etc/hostname
	I0729 14:37:52.686424 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-668123
	
	I0729 14:37:52.686459 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.689074 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689387 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.689422 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689619 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.689825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.689999 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.690164 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.690338 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.690511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.690526 1039263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-668123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-668123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-668123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:37:52.801778 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
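The shell snippet above is how the provisioner keeps 127.0.1.1 mapped to the machine's hostname: rewrite an existing 127.0.1.1 entry if one exists, otherwise append one. A rough Go equivalent of the same logic, assuming a writable hosts file path (a sketch, not the provisioner's code):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell snippet: if no line ends with the hostname,
// either rewrite an existing "127.0.1.1 ..." entry or append a new one.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
		return nil // already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, entry)
	} else {
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += entry + "\n"
	}
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "embed-certs-668123"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}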
	I0729 14:37:52.801812 1039263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:37:52.801841 1039263 buildroot.go:174] setting up certificates
	I0729 14:37:52.801851 1039263 provision.go:84] configureAuth start
	I0729 14:37:52.801863 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.802133 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.804526 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.804877 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.804910 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.805053 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.807140 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807369 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.807395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807505 1039263 provision.go:143] copyHostCerts
	I0729 14:37:52.807594 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:37:52.807608 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:37:52.807698 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:37:52.807840 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:37:52.807852 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:37:52.807891 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:37:52.807969 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:37:52.807979 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:37:52.808011 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:37:52.808084 1039263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-668123 san=[127.0.0.1 192.168.50.53 embed-certs-668123 localhost minikube]
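configureAuth generates a server certificate whose subject alternative names cover the SAN list printed above (node IP, hostname, localhost, minikube). The sketch below builds such a certificate with Go's crypto/x509; it is self-signed for brevity, whereas the real flow signs it with the ca.pem/ca-key.pem pair named in the log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-668123"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the ones listed in the log line above.
		DNSNames:    []string{"embed-certs-668123", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.53")},
	}

	// Self-signed for the sketch; the real flow signs with the cluster CA key.
	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

The resulting PEM plays the role of the server.pem that the next step copies to /etc/docker on the guest.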
	I0729 14:37:53.007382 1039263 provision.go:177] copyRemoteCerts
	I0729 14:37:53.007459 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:37:53.007548 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.010097 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010465 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.010488 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010660 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.010864 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.011037 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.011193 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.092043 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 14:37:53.116737 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:37:53.139828 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:37:53.162813 1039263 provision.go:87] duration metric: took 360.943219ms to configureAuth
	I0729 14:37:53.162856 1039263 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:37:53.163051 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:37:53.163144 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.165757 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166102 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.166130 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166272 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.166465 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166665 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166817 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.166983 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.167154 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.167169 1039263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:37:53.428217 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:37:53.428246 1039263 machine.go:97] duration metric: took 967.84942ms to provisionDockerMachine
	I0729 14:37:53.428258 1039263 start.go:293] postStartSetup for "embed-certs-668123" (driver="kvm2")
	I0729 14:37:53.428269 1039263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:37:53.428298 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.428641 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:37:53.428669 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.431228 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431593 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.431620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431797 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.431992 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.432159 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.432313 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.511226 1039263 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:37:53.515527 1039263 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:37:53.515555 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:37:53.515635 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:37:53.515724 1039263 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:37:53.515846 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:37:53.525606 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:53.548757 1039263 start.go:296] duration metric: took 120.484005ms for postStartSetup
	I0729 14:37:53.548798 1039263 fix.go:56] duration metric: took 19.371497305s for fixHost
	I0729 14:37:53.548827 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.551373 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551697 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.551725 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.552085 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552226 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552383 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.552574 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.552746 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.552756 1039263 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:37:53.653267 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263873.628230451
	
	I0729 14:37:53.653291 1039263 fix.go:216] guest clock: 1722263873.628230451
	I0729 14:37:53.653301 1039263 fix.go:229] Guest: 2024-07-29 14:37:53.628230451 +0000 UTC Remote: 2024-07-29 14:37:53.548802078 +0000 UTC m=+242.399919494 (delta=79.428373ms)
	I0729 14:37:53.653329 1039263 fix.go:200] guest clock delta is within tolerance: 79.428373ms
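The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and only force a resync when the delta exceeds a tolerance. A tiny illustration of that comparison; the 2s tolerance here is an assumed threshold for the example, not minikube's constant.

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between host and guest time.
func clockDelta(host, guest time.Time) time.Duration {
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	host := time.Now()
	guest := host.Add(79 * time.Millisecond) // roughly the delta reported in the log above
	const tolerance = 2 * time.Second        // illustrative threshold, not minikube's constant

	if d := clockDelta(host, guest); d <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v too large, would resync\n", d)
	}
}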
	I0729 14:37:53.653337 1039263 start.go:83] releasing machines lock for "embed-certs-668123", held for 19.476079428s
	I0729 14:37:53.653364 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.653673 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:53.656383 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656805 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.656836 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656958 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657597 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657831 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657923 1039263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:37:53.657981 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.658101 1039263 ssh_runner.go:195] Run: cat /version.json
	I0729 14:37:53.658129 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.660964 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661349 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661374 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661400 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661446 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661628 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661711 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661795 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.661918 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.662012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662092 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662200 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.662234 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.764286 1039263 ssh_runner.go:195] Run: systemctl --version
	I0729 14:37:53.772936 1039263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:37:53.922874 1039263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:37:53.928953 1039263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:37:53.929035 1039263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:37:53.947388 1039263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:37:53.947417 1039263 start.go:495] detecting cgroup driver to use...
	I0729 14:37:53.947496 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:37:53.964141 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:37:53.985980 1039263 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:37:53.986042 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:37:54.009646 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:37:54.023449 1039263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:37:54.139511 1039263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:37:54.312559 1039263 docker.go:233] disabling docker service ...
	I0729 14:37:54.312655 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:37:54.327466 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:37:54.342225 1039263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:37:54.485007 1039263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:37:54.623987 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:37:54.638100 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:37:54.658833 1039263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:37:54.658911 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.670274 1039263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:37:54.670366 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.681548 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.691626 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.701915 1039263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:37:54.713399 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.723631 1039263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.740625 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
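The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch CRI-O to the cgroupfs cgroup manager. The same two rewrites expressed in Go, operating directly on the file (a sketch; path and values are copied from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf pins the pause image and cgroup manager, as the sed calls above do.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}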
	I0729 14:37:54.751521 1039263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:37:54.761895 1039263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:37:54.761942 1039263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:37:54.775663 1039263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:37:54.785415 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:54.933441 1039263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:37:55.066449 1039263 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:37:55.066539 1039263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:37:55.071614 1039263 start.go:563] Will wait 60s for crictl version
	I0729 14:37:55.071671 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:37:55.075727 1039263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:37:55.117286 1039263 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:37:55.117395 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.145732 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.179714 1039263 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:37:55.181109 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:55.184274 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.184734 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:55.184761 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.185066 1039263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 14:37:55.190374 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:55.206768 1039263 kubeadm.go:883] updating cluster {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:37:55.207054 1039263 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:37:55.207130 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:55.247814 1039263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:37:55.247890 1039263 ssh_runner.go:195] Run: which lz4
	I0729 14:37:55.251992 1039263 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:37:55.256440 1039263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:37:55.256468 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:37:53.678402 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Start
	I0729 14:37:53.678610 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring networks are active...
	I0729 14:37:53.679311 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network default is active
	I0729 14:37:53.679767 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network mk-default-k8s-diff-port-751306 is active
	I0729 14:37:53.680133 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Getting domain xml...
	I0729 14:37:53.680818 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Creating domain...
	I0729 14:37:54.024601 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting to get IP...
	I0729 14:37:54.025431 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025838 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025944 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.025837 1040438 retry.go:31] will retry after 280.254814ms: waiting for machine to come up
	I0729 14:37:54.307727 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308293 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.308220 1040438 retry.go:31] will retry after 384.348242ms: waiting for machine to come up
	I0729 14:37:54.693703 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694304 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.694251 1040438 retry.go:31] will retry after 417.76448ms: waiting for machine to come up
	I0729 14:37:55.113670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114243 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114272 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.114191 1040438 retry.go:31] will retry after 589.741485ms: waiting for machine to come up
	I0729 14:37:55.706127 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706646 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.706569 1040438 retry.go:31] will retry after 471.427821ms: waiting for machine to come up
	I0729 14:37:56.179380 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179867 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179896 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.179814 1040438 retry.go:31] will retry after 624.275074ms: waiting for machine to come up
	I0729 14:37:56.805673 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806042 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806063 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.806018 1040438 retry.go:31] will retry after 1.027377333s: waiting for machine to come up
	I0729 14:37:56.743842 1039263 crio.go:462] duration metric: took 1.49188656s to copy over tarball
	I0729 14:37:56.743941 1039263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:37:58.879573 1039263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135595087s)
	I0729 14:37:58.879619 1039263 crio.go:469] duration metric: took 2.135735155s to extract the tarball
	I0729 14:37:58.879628 1039263 ssh_runner.go:146] rm: /preloaded.tar.lz4
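Copying preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 and unpacking it is what makes the following `crictl images` call report everything as preloaded. A small Go wrapper around the same tar invocation, which simply shells out the way the ssh_runner does remotely (a sketch under those assumptions):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a .tar.lz4 preload into dest, preserving xattrs,
// mirroring the tar invocation in the log above.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}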
	I0729 14:37:58.916966 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:58.958323 1039263 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:37:58.958349 1039263 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:37:58.958357 1039263 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.30.3 crio true true} ...
	I0729 14:37:58.958537 1039263 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-668123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:37:58.958632 1039263 ssh_runner.go:195] Run: crio config
	I0729 14:37:59.004120 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:37:59.004146 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:37:59.004163 1039263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:37:59.004192 1039263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-668123 NodeName:embed-certs-668123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:37:59.004371 1039263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-668123"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
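	The kubeadm config above is rendered by filling the node name, IP, and API server port from the cluster config into a template. A trimmed-down illustration with text/template; the template text is an abbreviation of the InitConfiguration block shown above, not minikube's full template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down version of the InitConfiguration section rendered above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	data := struct {
		NodeName      string
		NodeIP        string
		APIServerPort int
	}{"embed-certs-668123", "192.168.50.53", 8443}

	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	tmpl.Execute(os.Stdout, data) // writes the rendered YAML to stdout
}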
	
	I0729 14:37:59.004469 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:37:59.014796 1039263 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:37:59.014866 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:37:59.024575 1039263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 14:37:59.040707 1039263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:37:59.056693 1039263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 14:37:59.073320 1039263 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0729 14:37:59.077226 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:59.091283 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:59.221532 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:37:59.239319 1039263 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123 for IP: 192.168.50.53
	I0729 14:37:59.239362 1039263 certs.go:194] generating shared ca certs ...
	I0729 14:37:59.239387 1039263 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:37:59.239604 1039263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:37:59.239654 1039263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:37:59.239667 1039263 certs.go:256] generating profile certs ...
	I0729 14:37:59.239818 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/client.key
	I0729 14:37:59.239922 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key.544998fe
	I0729 14:37:59.239969 1039263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key
	I0729 14:37:59.240137 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:37:59.240188 1039263 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:37:59.240202 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:37:59.240238 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:37:59.240280 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:37:59.240313 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:37:59.240385 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:59.241551 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:37:59.278842 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:37:59.305668 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:37:59.332314 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:37:59.377867 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 14:37:59.405592 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:37:59.438073 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:37:59.462130 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:37:59.489158 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:37:59.511811 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:37:59.534728 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:37:59.558680 1039263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:37:59.575404 1039263 ssh_runner.go:195] Run: openssl version
	I0729 14:37:59.581518 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:37:59.592024 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596913 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596983 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.602973 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:37:59.613891 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:37:59.624053 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628881 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628922 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.634672 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:37:59.645513 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:37:59.656385 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661141 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661209 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.667169 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:37:59.678240 1039263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:37:59.683075 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:37:59.689013 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:37:59.694754 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:37:59.700865 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:37:59.706664 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:37:59.712457 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
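The series of `openssl x509 -checkend 86400` runs above verifies that each control-plane certificate stays valid for at least another 24 hours before the existing cluster is reused. An equivalent check with Go's crypto/x509 (the path is one of the files checked above; a sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given duration, like `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}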
	I0729 14:37:59.718347 1039263 kubeadm.go:392] StartCluster: {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:37:59.718460 1039263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:37:59.718505 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.756046 1039263 cri.go:89] found id: ""
	I0729 14:37:59.756143 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:37:59.766198 1039263 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:37:59.766222 1039263 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:37:59.766278 1039263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:37:59.775664 1039263 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:37:59.776877 1039263 kubeconfig.go:125] found "embed-certs-668123" server: "https://192.168.50.53:8443"
	I0729 14:37:59.778802 1039263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:37:59.787805 1039263 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.53
	I0729 14:37:59.787840 1039263 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:37:59.787854 1039263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:37:59.787908 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.828927 1039263 cri.go:89] found id: ""
	I0729 14:37:59.829016 1039263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:37:59.844889 1039263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:37:59.854233 1039263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:37:59.854264 1039263 kubeadm.go:157] found existing configuration files:
	
	I0729 14:37:59.854334 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:37:59.863123 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:37:59.863183 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:37:59.872154 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:37:59.880819 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:37:59.880881 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:37:59.889714 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.898278 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:37:59.898330 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.907358 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:37:59.916352 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:37:59.916430 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:37:59.925239 1039263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:37:59.934353 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.045086 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.793783 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.009839 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.080217 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.153377 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:01.153496 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:37:57.835202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835636 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835674 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:57.835572 1040438 retry.go:31] will retry after 987.763901ms: waiting for machine to come up
	I0729 14:37:58.824975 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825428 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825457 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:58.825348 1040438 retry.go:31] will retry after 1.189429393s: waiting for machine to come up
	I0729 14:38:00.016130 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016569 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016604 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:00.016509 1040438 retry.go:31] will retry after 1.424039091s: waiting for machine to come up
	I0729 14:38:01.443138 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443511 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:01.443470 1040438 retry.go:31] will retry after 2.531090823s: waiting for machine to come up
	I0729 14:38:01.653905 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.153772 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.653590 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.669429 1039263 api_server.go:72] duration metric: took 1.516051254s to wait for apiserver process to appear ...
	I0729 14:38:02.669467 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:02.669495 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.531413 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.531451 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.531467 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.602173 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.602205 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.670522 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.680835 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:05.680861 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.170512 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.176052 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.176084 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.669679 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.674813 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.674854 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:07.170539 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:07.174573 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:38:07.180250 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:07.180275 1039263 api_server.go:131] duration metric: took 4.510799806s to wait for apiserver health ...
	I0729 14:38:07.180284 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:38:07.180290 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:07.181866 1039263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:03.976004 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976514 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976544 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:03.976474 1040438 retry.go:31] will retry after 3.356304099s: waiting for machine to come up
	I0729 14:38:07.335600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336031 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336086 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:07.335992 1040438 retry.go:31] will retry after 3.345416128s: waiting for machine to come up
	I0729 14:38:07.182966 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:07.193166 1039263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:38:07.212801 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:07.221297 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:07.221331 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:07.221340 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:07.221347 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:07.221352 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:07.221364 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:38:07.221370 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:07.221379 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:07.221384 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:38:07.221390 1039263 system_pods.go:74] duration metric: took 8.574498ms to wait for pod list to return data ...
	I0729 14:38:07.221397 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:07.224197 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:07.224220 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:07.224231 1039263 node_conditions.go:105] duration metric: took 2.829585ms to run NodePressure ...
	I0729 14:38:07.224246 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:07.520049 1039263 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524228 1039263 kubeadm.go:739] kubelet initialised
	I0729 14:38:07.524251 1039263 kubeadm.go:740] duration metric: took 4.174563ms waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524262 1039263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:07.529174 1039263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.533534 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533554 1039263 pod_ready.go:81] duration metric: took 4.355926ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.533562 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.537529 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537550 1039263 pod_ready.go:81] duration metric: took 3.975082ms for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.537561 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.542299 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542325 1039263 pod_ready.go:81] duration metric: took 4.747863ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.542371 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542390 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.616688 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616725 1039263 pod_ready.go:81] duration metric: took 74.323327ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.616740 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616750 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.016334 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016360 1039263 pod_ready.go:81] duration metric: took 399.599984ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.016369 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016374 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.416536 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416571 1039263 pod_ready.go:81] duration metric: took 400.189243ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.416585 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416594 1039263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.817526 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817561 1039263 pod_ready.go:81] duration metric: took 400.956263ms for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.817572 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817590 1039263 pod_ready.go:38] duration metric: took 1.293313082s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:08.817610 1039263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:38:08.829394 1039263 ops.go:34] apiserver oom_adj: -16
	I0729 14:38:08.829425 1039263 kubeadm.go:597] duration metric: took 9.06319609s to restartPrimaryControlPlane
	I0729 14:38:08.829436 1039263 kubeadm.go:394] duration metric: took 9.111098315s to StartCluster
	I0729 14:38:08.829457 1039263 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.829548 1039263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:08.831113 1039263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.831396 1039263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:38:08.831441 1039263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:38:08.831524 1039263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-668123"
	I0729 14:38:08.831539 1039263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-668123"
	I0729 14:38:08.831562 1039263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-668123"
	W0729 14:38:08.831572 1039263 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:38:08.831561 1039263 addons.go:69] Setting metrics-server=true in profile "embed-certs-668123"
	I0729 14:38:08.831593 1039263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-668123"
	I0729 14:38:08.831601 1039263 addons.go:234] Setting addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:08.831609 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	W0729 14:38:08.831610 1039263 addons.go:243] addon metrics-server should already be in state true
	I0729 14:38:08.831632 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:08.831644 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.831916 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831933 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831918 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831956 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831945 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831964 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.833223 1039263 out.go:177] * Verifying Kubernetes components...
	I0729 14:38:08.834403 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:08.847231 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0729 14:38:08.847362 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0729 14:38:08.847398 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0729 14:38:08.847797 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847896 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847904 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.848350 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848371 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848487 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848507 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848520 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848540 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848854 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848867 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.849010 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849392 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.849416 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.851933 1039263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-668123"
	W0729 14:38:08.851955 1039263 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:38:08.851988 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.852284 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.852330 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.865255 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I0729 14:38:08.865707 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.865981 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0729 14:38:08.866157 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866183 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.866419 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.866531 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.866804 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.866895 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866920 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.867272 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.867839 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.867885 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.868000 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0729 14:38:08.868397 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.868741 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.868886 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.868903 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.869276 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.869501 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.870835 1039263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:38:08.871289 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.872267 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:38:08.872289 1039263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:38:08.872306 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.873165 1039263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:08.874593 1039263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:08.874616 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:38:08.874635 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.875061 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875572 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.875605 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875815 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.876012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.876208 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.876370 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.877997 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878394 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.878423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878555 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.878726 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.878889 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.879002 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.890720 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0729 14:38:08.891092 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.891619 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.891638 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.891972 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.892184 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.893577 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.893817 1039263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:08.893840 1039263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:38:08.893859 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.896843 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897302 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.897320 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897464 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.897618 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.897866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.897966 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:09.019001 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:09.038038 1039263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:09.097896 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:09.101844 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:09.229339 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:38:09.229360 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:38:09.317591 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:38:09.317625 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:38:09.370444 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:09.370490 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:38:09.407869 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:10.014739 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014767 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.014873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014897 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015112 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015150 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015157 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015166 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015174 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015284 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015297 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015306 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015313 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015384 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015413 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015611 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015641 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024010 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.024031 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.024299 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.024318 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024343 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.233873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.233903 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234247 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.234260 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234275 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234292 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.234301 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234546 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234563 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234574 1039263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:10.236215 1039263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:38:10.237377 1039263 addons.go:510] duration metric: took 1.405942032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:38:11.042263 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:12.129080 1039759 start.go:364] duration metric: took 3m18.14725367s to acquireMachinesLock for "old-k8s-version-360866"
	I0729 14:38:12.129155 1039759 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:12.129166 1039759 fix.go:54] fixHost starting: 
	I0729 14:38:12.129715 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:12.129752 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:12.146596 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0729 14:38:12.147101 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:12.147554 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:38:12.147581 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:12.147871 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:12.148094 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:12.148293 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetState
	I0729 14:38:12.149880 1039759 fix.go:112] recreateIfNeeded on old-k8s-version-360866: state=Stopped err=<nil>
	I0729 14:38:12.149918 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	W0729 14:38:12.150120 1039759 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:12.152003 1039759 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-360866" ...
	I0729 14:38:10.683699 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684108 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Found IP for machine: 192.168.72.233
	I0729 14:38:10.684148 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has current primary IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684161 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserving static IP address...
	I0729 14:38:10.684506 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.684540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | skip adding static IP to network mk-default-k8s-diff-port-751306 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"}
	I0729 14:38:10.684558 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserved static IP address: 192.168.72.233
	I0729 14:38:10.684581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for SSH to be available...
	I0729 14:38:10.684600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Getting to WaitForSSH function...
	I0729 14:38:10.686336 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686684 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.686713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686825 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH client type: external
	I0729 14:38:10.686856 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa (-rw-------)
	I0729 14:38:10.686894 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:10.686904 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | About to run SSH command:
	I0729 14:38:10.686921 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | exit 0
	I0729 14:38:10.808536 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:10.808965 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetConfigRaw
	I0729 14:38:10.809613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:10.812200 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812590 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.812625 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812862 1039440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/config.json ...
	I0729 14:38:10.813089 1039440 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:10.813110 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:10.813322 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.815607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.815933 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.815962 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.816113 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.816287 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816450 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.816838 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.817167 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.817184 1039440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:10.916864 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:10.916908 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917215 1039440 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751306"
	I0729 14:38:10.917249 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.919961 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920339 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.920363 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920471 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.920660 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.921145 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.921358 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.921377 1039440 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751306 && echo "default-k8s-diff-port-751306" | sudo tee /etc/hostname
	I0729 14:38:11.034826 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751306
	
	I0729 14:38:11.034859 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.037494 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.037836 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.037870 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.038068 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.038274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038410 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038575 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.038736 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.038971 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.038998 1039440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751306/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:11.146350 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:11.146391 1039440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:11.146449 1039440 buildroot.go:174] setting up certificates
	I0729 14:38:11.146463 1039440 provision.go:84] configureAuth start
	I0729 14:38:11.146478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:11.146842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:11.149492 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149766 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.149796 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.152449 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152735 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.152785 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152956 1039440 provision.go:143] copyHostCerts
	I0729 14:38:11.153010 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:11.153021 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:11.153074 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:11.153172 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:11.153180 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:11.153198 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:11.153253 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:11.153260 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:11.153276 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:11.153324 1039440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751306 san=[127.0.0.1 192.168.72.233 default-k8s-diff-port-751306 localhost minikube]
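Note on the provision.go line above: it issues the machine's server certificate with both IP and DNS SANs (127.0.0.1, 192.168.72.233, default-k8s-diff-port-751306, localhost, minikube), signed by the minikube CA. Below is a minimal, self-contained Go sketch of producing a certificate with those SANs; it is not minikube's provisioning code, and the self-signed issuer and one-year validity are assumptions for illustration (the real flow signs with ca.pem/ca-key.pem).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Illustrative only: issue a server certificate carrying the SANs listed in the
// log line above. Self-signed for brevity; the validity window is an assumption.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-751306"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // assumed lifetime
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "san=[...]" list in the log line.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.233")},
		DNSNames:    []string{"default-k8s-diff-port-751306", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}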
	I0729 14:38:11.489907 1039440 provision.go:177] copyRemoteCerts
	I0729 14:38:11.489990 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:11.490028 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.492487 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492801 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.492832 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492992 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.493220 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.493467 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.493611 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.574475 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:11.598182 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:11.622809 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 14:38:11.646533 1039440 provision.go:87] duration metric: took 500.054139ms to configureAuth
	I0729 14:38:11.646563 1039440 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:11.646742 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:11.646822 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.649260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.649616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649729 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.649934 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.650436 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.650610 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.650628 1039440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:11.906877 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:11.906918 1039440 machine.go:97] duration metric: took 1.093811728s to provisionDockerMachine
	I0729 14:38:11.906936 1039440 start.go:293] postStartSetup for "default-k8s-diff-port-751306" (driver="kvm2")
	I0729 14:38:11.906951 1039440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:11.906977 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:11.907366 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:11.907407 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.910366 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910725 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.910748 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910913 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.911162 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.911323 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.911529 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.992133 1039440 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:11.996426 1039440 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:11.996456 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:11.996544 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:11.996641 1039440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:11.996747 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:12.006165 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:12.029591 1039440 start.go:296] duration metric: took 122.613174ms for postStartSetup
	I0729 14:38:12.029643 1039440 fix.go:56] duration metric: took 18.376148792s for fixHost
	I0729 14:38:12.029670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.032299 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032667 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.032731 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032901 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.033104 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033372 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.033510 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:12.033679 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:12.033688 1039440 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:12.128889 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263892.107886376
	
	I0729 14:38:12.128917 1039440 fix.go:216] guest clock: 1722263892.107886376
	I0729 14:38:12.128926 1039440 fix.go:229] Guest: 2024-07-29 14:38:12.107886376 +0000 UTC Remote: 2024-07-29 14:38:12.029648961 +0000 UTC m=+239.632909800 (delta=78.237415ms)
	I0729 14:38:12.128955 1039440 fix.go:200] guest clock delta is within tolerance: 78.237415ms
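Note on the fix.go lines above: the guest clock (read over SSH with date) is compared against the host clock, and the restart is accepted only when the skew is small; here the delta is 78.237415ms. A minimal Go sketch of such a tolerance check follows, with an assumed tolerance value rather than minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock skew is acceptable.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(78 * time.Millisecond) // delta reported in the log above
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second)) // 2s is an assumed tolerance
}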
	I0729 14:38:12.128961 1039440 start.go:83] releasing machines lock for "default-k8s-diff-port-751306", held for 18.475504041s
	I0729 14:38:12.128995 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.129301 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:12.132025 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132374 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.132401 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132566 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133087 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133273 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133349 1039440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:12.133404 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.133513 1039440 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:12.133534 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.136121 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136149 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136523 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136577 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136624 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136716 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136793 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136917 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.136973 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.137088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137165 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137292 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.137232 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.233842 1039440 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:12.240082 1039440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:12.388404 1039440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:12.395038 1039440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:12.395127 1039440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:12.416590 1039440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:12.416618 1039440 start.go:495] detecting cgroup driver to use...
	I0729 14:38:12.416682 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:12.437863 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:12.453458 1039440 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:12.453508 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:12.467657 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:12.482328 1039440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:12.610786 1039440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:12.774787 1039440 docker.go:233] disabling docker service ...
	I0729 14:38:12.774861 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:12.790091 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:12.803914 1039440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:12.933894 1039440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:13.052159 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:13.069309 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:13.089959 1039440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:38:13.090014 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.102668 1039440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:13.102741 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.113634 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.124374 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.135488 1039440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:13.147171 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.159757 1039440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.178620 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.189326 1039440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:13.200007 1039440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:13.200067 1039440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:13.213063 1039440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
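Note on the three runs above: reading net.bridge.bridge-nf-call-iptables fails with status 255 because the bridge netfilter module is not loaded, so br_netfilter is loaded with modprobe and IPv4 forwarding is enabled. A rough Go sketch of the same fallback sequence follows; it must run as root, the command names mirror the logged shell invocations, and everything else is an assumption rather than minikube's code.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl cannot be read, the module is likely missing.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl check failed (%v); loading br_netfilter", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}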
	I0729 14:38:13.226044 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:13.360685 1039440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:13.508473 1039440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:13.508556 1039440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:13.513547 1039440 start.go:563] Will wait 60s for crictl version
	I0729 14:38:13.513619 1039440 ssh_runner.go:195] Run: which crictl
	I0729 14:38:13.518528 1039440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:13.567103 1039440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:13.567180 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.603837 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.633583 1039440 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:38:12.153214 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .Start
	I0729 14:38:12.153408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring networks are active...
	I0729 14:38:12.154141 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network default is active
	I0729 14:38:12.154590 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network mk-old-k8s-version-360866 is active
	I0729 14:38:12.154970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Getting domain xml...
	I0729 14:38:12.155733 1039759 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:38:12.526504 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting to get IP...
	I0729 14:38:12.527560 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.528068 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.528147 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.528048 1040622 retry.go:31] will retry after 240.079974ms: waiting for machine to come up
	I0729 14:38:12.769388 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.769881 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.769910 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.769829 1040622 retry.go:31] will retry after 271.200632ms: waiting for machine to come up
	I0729 14:38:13.042584 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.043069 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.043101 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.043049 1040622 retry.go:31] will retry after 464.725959ms: waiting for machine to come up
	I0729 14:38:13.509830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.510400 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.510434 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.510350 1040622 retry.go:31] will retry after 416.316047ms: waiting for machine to come up
	I0729 14:38:13.042877 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:15.051282 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:13.635092 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:13.638202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638665 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:13.638691 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638933 1039440 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:13.642960 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:13.656098 1039440 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:13.656208 1039440 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:38:13.656255 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:13.697398 1039440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:38:13.697475 1039440 ssh_runner.go:195] Run: which lz4
	I0729 14:38:13.701632 1039440 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:38:13.707129 1039440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:13.707162 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:38:15.218414 1039440 crio.go:462] duration metric: took 1.516807674s to copy over tarball
	I0729 14:38:15.218505 1039440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:13.927885 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.928343 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.928373 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.928307 1040622 retry.go:31] will retry after 659.670364ms: waiting for machine to come up
	I0729 14:38:14.589644 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:14.590143 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:14.590172 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:14.590031 1040622 retry.go:31] will retry after 738.020335ms: waiting for machine to come up
	I0729 14:38:15.330093 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:15.330603 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:15.330633 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:15.330553 1040622 retry.go:31] will retry after 1.13067902s: waiting for machine to come up
	I0729 14:38:16.462554 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:16.463002 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:16.463031 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:16.462977 1040622 retry.go:31] will retry after 1.342785853s: waiting for machine to come up
	I0729 14:38:17.806889 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:17.807333 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:17.807365 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:17.807266 1040622 retry.go:31] will retry after 1.804812934s: waiting for machine to come up
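Note on the retry.go lines above: while the restarted old-k8s-version VM waits for a DHCP lease, the driver polls for the domain's IP with growing delays (240ms, 271ms, 464ms, ... 1.8s). A minimal Go sketch of such a retry loop follows; the lookup callback and backoff parameters are assumptions for illustration, not minikube's retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP polls lookup with growing, jittered delays until it returns an IP.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // back off, as the lengthening intervals in the log suggest
	}
	return "", errNoIP
}

func main() {
	n := 0
	ip, err := waitForIP(func() (string, error) {
		n++
		if n < 4 { // pretend the lease appears on the fourth poll
			return "", errNoIP
		}
		return "192.168.50.10", nil
	}, 10)
	fmt.Println(ip, err)
}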
	I0729 14:38:16.550848 1039263 node_ready.go:49] node "embed-certs-668123" has status "Ready":"True"
	I0729 14:38:16.550880 1039263 node_ready.go:38] duration metric: took 7.512808712s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:16.550895 1039263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:16.563220 1039263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570054 1039263 pod_ready.go:92] pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:16.570080 1039263 pod_ready.go:81] duration metric: took 6.832939ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570091 1039263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:19.207981 1039263 pod_ready.go:102] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:17.498961 1039440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.280415291s)
	I0729 14:38:17.498997 1039440 crio.go:469] duration metric: took 2.280548689s to extract the tarball
	I0729 14:38:17.499008 1039440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:17.537972 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:17.583582 1039440 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:38:17.583609 1039440 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:38:17.583617 1039440 kubeadm.go:934] updating node { 192.168.72.233 8444 v1.30.3 crio true true} ...
	I0729 14:38:17.583719 1039440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:17.583789 1039440 ssh_runner.go:195] Run: crio config
	I0729 14:38:17.637202 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:17.637230 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:17.637243 1039440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:17.637272 1039440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.233 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751306 NodeName:default-k8s-diff-port-751306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:38:17.637451 1039440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751306"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
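Note on the kubeadm/kubelet/kube-proxy configuration dumped above: it is generated from the cluster config logged earlier (node IP 192.168.72.233, API server port 8444, crio socket). A simplified Go sketch of rendering such a manifest from a template follows; the struct fields and template text here are assumptions for illustration, not minikube's actual templates.

package main

import (
	"os"
	"text/template"
)

// A cut-down InitConfiguration template; the real config carries many more sections.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	data := struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.72.233", 8444, "default-k8s-diff-port-751306"}
	tmpl := template.Must(template.New("init").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}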
	
	I0729 14:38:17.637528 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:38:17.650173 1039440 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:17.650259 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:17.661790 1039440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 14:38:17.680720 1039440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:17.700420 1039440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 14:38:17.723134 1039440 ssh_runner.go:195] Run: grep 192.168.72.233	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:17.727510 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:17.741033 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:17.889833 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:17.910486 1039440 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306 for IP: 192.168.72.233
	I0729 14:38:17.910540 1039440 certs.go:194] generating shared ca certs ...
	I0729 14:38:17.910565 1039440 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:17.910763 1039440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:17.910821 1039440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:17.910833 1039440 certs.go:256] generating profile certs ...
	I0729 14:38:17.910941 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/client.key
	I0729 14:38:17.911003 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key.811a3f6d
	I0729 14:38:17.911105 1039440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key
	I0729 14:38:17.911271 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:17.911315 1039440 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:17.911329 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:17.911362 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:17.911393 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:17.911426 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:17.911478 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:17.912301 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:17.948102 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:17.984122 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:18.019932 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:18.062310 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 14:38:18.093176 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:38:18.124016 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:18.151933 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:38:18.179714 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:18.203414 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:18.233286 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:18.262871 1039440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
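
The lines above push the CA and profile certificates onto the node and write the kubeconfig from memory via minikube's internal ssh_runner. As a rough illustration only (not minikube's implementation), the same transfer could be done by shelling out to scp; the key path, user and address below are placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
)

// pushFile copies a local file to the VM with the scp binary. Minikube's own
// transfer goes through its ssh_runner rather than scp; this is a sketch.
func pushFile(local, remote string) error {
	cmd := exec.Command("scp",
		"-i", "/path/to/id_rsa", // hypothetical key path
		"-o", "StrictHostKeyChecking=no",
		local, "docker@192.168.72.233:"+remote)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(pushFile("ca.crt", "/var/lib/minikube/certs/ca.crt"))
}
```
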
	I0729 14:38:18.283064 1039440 ssh_runner.go:195] Run: openssl version
	I0729 14:38:18.289016 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:18.299409 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304053 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304115 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.309976 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:18.321472 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:18.331916 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336822 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336881 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.342762 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:18.353478 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:18.364299 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369024 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369076 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.376534 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
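
The loop above installs each CA certificate into the node's trust store: link the cert into /etc/ssl/certs, compute its OpenSSL subject hash, and create a `<hash>.0` symlink so OpenSSL-based clients find it. A minimal Go sketch of the hash + symlink steps, slightly simplified (it links the hash name straight to the source file, whereas the log first links the cert into /etc/ssl/certs under its own name):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the `openssl x509 -hash` + `ln -fs` steps above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %v", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	return exec.Command("ln", "-fs", certPath, link).Run()
}

func main() {
	fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}
```
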
	I0729 14:38:18.387360 1039440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:18.392392 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:18.398520 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:18.404397 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:18.410922 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:18.417193 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:18.423808 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
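
`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the control-plane certs above are screened before reuse. The same condition can be reproduced locally with crypto/x509; a sketch, not the code minikube runs:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```
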
	I0729 14:38:18.433345 1039440 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:18.433463 1039440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:18.433582 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.476749 1039440 cri.go:89] found id: ""
	I0729 14:38:18.476834 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:18.488548 1039440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:18.488570 1039440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:18.488628 1039440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:18.499081 1039440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:18.500064 1039440 kubeconfig.go:125] found "default-k8s-diff-port-751306" server: "https://192.168.72.233:8444"
	I0729 14:38:18.502130 1039440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:18.511589 1039440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.233
	I0729 14:38:18.511631 1039440 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:18.511646 1039440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:18.511698 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.559691 1039440 cri.go:89] found id: ""
	I0729 14:38:18.559779 1039440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:18.576217 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:18.586031 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:18.586057 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:18.586110 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:38:18.595032 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:18.595096 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:18.604320 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:38:18.613996 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:18.614053 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:18.623345 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.631898 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:18.631943 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.641303 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:38:18.649849 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:18.649907 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
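
The stale-config cleanup above greps each file under /etc/kubernetes for the expected control-plane endpoint and removes any file that is missing it (or missing entirely), so the kubeconfig phase can regenerate them cleanly. A compact sketch of that check, with the endpoint and paths taken from the log and error handling simplified:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleConfigs removes any config that does not mention the expected
// API endpoint, mirroring the grep + rm -f sequence in the log above.
func pruneStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(p) // missing or stale: let kubeadm regenerate it
			fmt.Println("removed", p)
		}
	}
}

func main() {
	pruneStaleConfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```
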
	I0729 14:38:18.659657 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:18.668914 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:18.782351 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:19.902413 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.120025721s)
	I0729 14:38:19.902451 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.120455 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.206099 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
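
On restart the control plane is rebuilt by running individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than a full `kubeadm init`. A sketch of driving that sequence from Go, with the binary path and config path copied from the log (this is not minikube's kubeadm.go):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			"sudo env PATH=/var/lib/minikube/binaries/v1.30.3:$PATH kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
			phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("control plane phases completed")
}
```
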
	I0729 14:38:20.293738 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:20.293850 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:20.794840 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.294958 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.313567 1039440 api_server.go:72] duration metric: took 1.019826572s to wait for apiserver process to appear ...
	I0729 14:38:21.313600 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:21.313625 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:21.314152 1039440 api_server.go:269] stopped: https://192.168.72.233:8444/healthz: Get "https://192.168.72.233:8444/healthz": dial tcp 192.168.72.233:8444: connect: connection refused
	I0729 14:38:21.813935 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:19.613474 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:19.613801 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:19.613830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:19.613749 1040622 retry.go:31] will retry after 1.449593132s: waiting for machine to come up
	I0729 14:38:21.064774 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:21.065382 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:21.065405 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:21.065314 1040622 retry.go:31] will retry after 1.807508073s: waiting for machine to come up
	I0729 14:38:22.874485 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:22.874896 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:22.874925 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:22.874844 1040622 retry.go:31] will retry after 3.036719557s: waiting for machine to come up
	I0729 14:38:21.578125 1039263 pod_ready.go:92] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.578152 1039263 pod_ready.go:81] duration metric: took 5.008051755s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.578164 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584521 1039263 pod_ready.go:92] pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.584544 1039263 pod_ready.go:81] duration metric: took 6.372252ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584558 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590245 1039263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.590269 1039263 pod_ready.go:81] duration metric: took 5.702853ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590280 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594576 1039263 pod_ready.go:92] pod "kube-proxy-2v79q" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.594602 1039263 pod_ready.go:81] duration metric: took 4.314692ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594614 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787339 1039263 pod_ready.go:92] pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.787379 1039263 pod_ready.go:81] duration metric: took 192.756548ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787399 1039263 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:23.795588 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
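
The pod_ready lines above poll each control-plane pod and report "Ready":"True" or "False" until the pod's Ready condition is met or the per-pod timeout expires. A minimal client-go sketch of the same check; the kubeconfig path is a placeholder and the pod name is taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 6m, the same window the log uses.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-668123", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep waiting
		}
		return podReady(pod), nil
	})
	fmt.Println("ready:", err == nil)
}
```
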
	I0729 14:38:24.561135 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:24.561176 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:24.561195 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.635519 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.635550 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:24.813755 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.817972 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.818003 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.314643 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.320059 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.320094 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.814758 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.820578 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.820613 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.314798 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.319346 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.319384 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.813907 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.821176 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.821208 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.314614 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.319335 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:27.319361 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.814188 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.819010 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:38:27.826057 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:27.826082 1039440 api_server.go:131] duration metric: took 6.512474877s to wait for apiserver health ...
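
The healthz wait above repeatedly GETs https://192.168.72.233:8444/healthz, treating connection refused, 403 (the anonymous probe before RBAC bootstrap completes) and 500 (poststarthooks still failing) as "not ready yet", and stops once the endpoint returns 200. A hedged sketch of such a wait loop; it skips TLS verification for brevity, which is not how minikube's own client is configured:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("%s did not become healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.233:8444/healthz", 2*time.Minute))
}
```
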
	I0729 14:38:27.826091 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:27.826098 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:27.827698 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:25.913642 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:25.914139 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:25.914166 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:25.914099 1040622 retry.go:31] will retry after 3.839238383s: waiting for machine to come up
	I0729 14:38:26.293618 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:28.294115 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:30.296010 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.361688 1038758 start.go:364] duration metric: took 52.182622805s to acquireMachinesLock for "no-preload-603534"
	I0729 14:38:31.361756 1038758 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:31.361765 1038758 fix.go:54] fixHost starting: 
	I0729 14:38:31.362279 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:31.362319 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:31.380259 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0729 14:38:31.380783 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:31.381320 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:38:31.381349 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:31.381649 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:31.381848 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:31.381989 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:38:31.383537 1038758 fix.go:112] recreateIfNeeded on no-preload-603534: state=Stopped err=<nil>
	I0729 14:38:31.383561 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	W0729 14:38:31.383739 1038758 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:31.385496 1038758 out.go:177] * Restarting existing kvm2 VM for "no-preload-603534" ...
	I0729 14:38:31.386878 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Start
	I0729 14:38:31.387071 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring networks are active...
	I0729 14:38:31.387821 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network default is active
	I0729 14:38:31.388141 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network mk-no-preload-603534 is active
	I0729 14:38:31.388649 1038758 main.go:141] libmachine: (no-preload-603534) Getting domain xml...
	I0729 14:38:31.391807 1038758 main.go:141] libmachine: (no-preload-603534) Creating domain...
	I0729 14:38:27.829109 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:27.839810 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
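
"Configuring bridge CNI" writes a single conflist into /etc/cni/net.d. The exact contents of minikube's 1-k8s.conflist are not shown in the log; the snippet below writes a generic bridge + portmap conflist of the usual shape, purely as an illustration:

```go
package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI configuration; the file minikube actually ships may
// differ in names, subnet and options.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println("write failed:", err)
	}
}
```
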
	I0729 14:38:27.858724 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:27.868075 1039440 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:27.868112 1039440 system_pods.go:61] "coredns-7db6d8ff4d-m6dlw" [7ce45b48-f04d-4167-8a6e-643b2fb3c4f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:27.868121 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [7ccadfd7-8b68-45c0-9670-af97b90d35d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:27.868128 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [5e8c8e17-28db-499c-a940-e67d92b28bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:27.868134 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [a2d31d58-d8d9-4070-96af-0d1af763d0b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:27.868140 1039440 system_pods.go:61] "kube-proxy-p6dv5" [c44edf0a-f608-49f2-ab53-7ffbcdf13b5e] Running
	I0729 14:38:27.868146 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [b87ee044-f43f-4aa7-94b3-4f2ad2213ce9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:27.868152 1039440 system_pods.go:61] "metrics-server-569cc877fc-gmz64" [296e883c-7394-4004-a25f-e93b4be52d46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:27.868156 1039440 system_pods.go:61] "storage-provisioner" [ec3b78f1-96a3-47b2-958d-82258a074634] Running
	I0729 14:38:27.868165 1039440 system_pods.go:74] duration metric: took 9.405484ms to wait for pod list to return data ...
	I0729 14:38:27.868173 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:27.871538 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:27.871563 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:27.871575 1039440 node_conditions.go:105] duration metric: took 3.397306ms to run NodePressure ...
	I0729 14:38:27.871596 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:28.143890 1039440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148855 1039440 kubeadm.go:739] kubelet initialised
	I0729 14:38:28.148880 1039440 kubeadm.go:740] duration metric: took 4.952057ms waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148891 1039440 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:28.154636 1039440 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:30.161265 1039440 pod_ready.go:102] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.161979 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:31.162005 1039440 pod_ready.go:81] duration metric: took 3.007344998s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:31.162015 1039440 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:29.755060 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755512 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has current primary IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755524 1039759 main.go:141] libmachine: (old-k8s-version-360866) Found IP for machine: 192.168.39.71
	I0729 14:38:29.755536 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserving static IP address...
	I0729 14:38:29.755975 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.756008 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserved static IP address: 192.168.39.71
	I0729 14:38:29.756035 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | skip adding static IP to network mk-old-k8s-version-360866 - found existing host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"}
	I0729 14:38:29.756048 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting for SSH to be available...
	I0729 14:38:29.756067 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Getting to WaitForSSH function...
	I0729 14:38:29.758527 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.758899 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.758944 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.759003 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH client type: external
	I0729 14:38:29.759024 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa (-rw-------)
	I0729 14:38:29.759058 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:29.759070 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | About to run SSH command:
	I0729 14:38:29.759083 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | exit 0
	I0729 14:38:29.884425 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | SSH cmd err, output: <nil>: 
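
WaitForSSH above probes the freshly started VM by running `exit 0` over SSH until the command succeeds. A sketch of the same readiness probe using golang.org/x/crypto/ssh; the address, user and key path are placeholders, and this is an illustration rather than libmachine's implementation:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady returns true once `exit 0` can be run on the host.
func sshReady(addr, user, keyPath string) bool {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return false
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return false
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return false
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return false
	}
	defer session.Close()
	return session.Run("exit 0") == nil
}

func main() {
	for !sshReady("192.168.39.71:22", "docker", "/path/to/id_rsa") {
		time.Sleep(2 * time.Second) // retry until the VM's sshd is reachable
	}
	fmt.Println("ssh available")
}
```
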
	I0729 14:38:29.884833 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:38:29.885450 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:29.887929 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888241 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.888294 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888624 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:38:29.888895 1039759 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:29.888919 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:29.889221 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.891654 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892013 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.892038 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892163 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.892350 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892598 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892764 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.892968 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.893158 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.893169 1039759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:29.993529 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:29.993564 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.993859 1039759 buildroot.go:166] provisioning hostname "old-k8s-version-360866"
	I0729 14:38:29.993893 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.994074 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.996882 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997279 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.997308 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997537 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.997699 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997856 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997976 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.998206 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.998412 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.998429 1039759 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-360866 && echo "old-k8s-version-360866" | sudo tee /etc/hostname
	I0729 14:38:30.115298 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-360866
	
	I0729 14:38:30.115331 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.118349 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.118763 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.118793 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.119029 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.119203 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119356 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119561 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.119772 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.119976 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.120019 1039759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-360866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-360866/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-360866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:30.229987 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
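The SSH exchange above ends with an idempotent /etc/hosts rewrite: the 127.0.1.1 entry is only touched when the new hostname is not already present. Below is a minimal Go sketch of how such a command string could be composed; it is an illustration only, not minikube's actual provisioner code, and the helper name hostsUpdateCmd is hypothetical.

// Illustrative sketch (hypothetical helper, not minikube source): compose the
// idempotent /etc/hosts update shown in the log above for a given hostname.
package main

import "fmt"

func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("old-k8s-version-360866"))
}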
	I0729 14:38:30.230017 1039759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:30.230059 1039759 buildroot.go:174] setting up certificates
	I0729 14:38:30.230070 1039759 provision.go:84] configureAuth start
	I0729 14:38:30.230090 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:30.230436 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:30.233150 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233501 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.233533 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233719 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.236157 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236494 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.236534 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236713 1039759 provision.go:143] copyHostCerts
	I0729 14:38:30.236786 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:30.236797 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:30.236856 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:30.236976 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:30.236986 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:30.237006 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:30.237071 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:30.237078 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:30.237095 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:30.237153 1039759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-360866 san=[127.0.0.1 192.168.39.71 localhost minikube old-k8s-version-360866]
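provision.go reports generating a server certificate whose SANs cover the loopback address, the VM IP, and the machine names listed in the log entry above. A rough Go sketch of issuing a certificate with those SANs via crypto/x509 follows; it is self-signed for brevity and is not minikube's actual certificate code, which signs with the CA key instead.

// Rough sketch (not minikube source): issue a server cert carrying the SANs
// listed above, self-signed here instead of CA-signed for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-360866"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-360866"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.71")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}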
	I0729 14:38:30.680859 1039759 provision.go:177] copyRemoteCerts
	I0729 14:38:30.680933 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:30.680970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.683890 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684229 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.684262 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684430 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.684634 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.684822 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.684973 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:30.770659 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:30.799011 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 14:38:30.825536 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:30.850751 1039759 provision.go:87] duration metric: took 620.664228ms to configureAuth
	I0729 14:38:30.850795 1039759 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:30.850998 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:38:30.851072 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.853735 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854065 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.854102 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854197 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.854408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854559 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854717 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.854961 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.855169 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.855187 1039759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:31.119354 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:31.119386 1039759 machine.go:97] duration metric: took 1.230472142s to provisionDockerMachine
	I0729 14:38:31.119401 1039759 start.go:293] postStartSetup for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:38:31.119415 1039759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:31.119456 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.119885 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:31.119926 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.123196 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123576 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.123607 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123826 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.124053 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.124276 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.124469 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.208607 1039759 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:31.213173 1039759 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:31.213206 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:31.213268 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:31.213352 1039759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:31.213454 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:31.225256 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:31.253156 1039759 start.go:296] duration metric: took 133.735669ms for postStartSetup
	I0729 14:38:31.253208 1039759 fix.go:56] duration metric: took 19.124042428s for fixHost
	I0729 14:38:31.253237 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.256005 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256340 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.256375 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256535 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.256732 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.256927 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.257075 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.257272 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:31.257445 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:31.257455 1039759 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:38:31.361488 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263911.340365932
	
	I0729 14:38:31.361512 1039759 fix.go:216] guest clock: 1722263911.340365932
	I0729 14:38:31.361519 1039759 fix.go:229] Guest: 2024-07-29 14:38:31.340365932 +0000 UTC Remote: 2024-07-29 14:38:31.253213714 +0000 UTC m=+217.413183116 (delta=87.152218ms)
	I0729 14:38:31.361572 1039759 fix.go:200] guest clock delta is within tolerance: 87.152218ms
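fix.go compares the guest clock (read with the date command above) against the host-side timestamp and accepts a small drift, reported here as 87.152218ms. A minimal Go sketch of that comparison using the values from this log entry; the one-second tolerance below is an assumption, not the value minikube actually uses.

// Minimal sketch (assumed tolerance, not minikube's fix.go): check guest/host
// clock drift against a tolerance.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Date(2024, 7, 29, 14, 38, 31, 253213714, time.UTC)
	guest := time.Unix(1722263911, 340365932).UTC()
	delta, ok := withinTolerance(guest, host, time.Second) // tolerance value is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}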
	I0729 14:38:31.361583 1039759 start.go:83] releasing machines lock for "old-k8s-version-360866", held for 19.232453759s
	I0729 14:38:31.361611 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.361921 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:31.364981 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365412 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.365441 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365648 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366227 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366482 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366583 1039759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:31.366644 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.366761 1039759 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:31.366797 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.369658 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.369699 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370051 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370081 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370105 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370125 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370309 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370325 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370567 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370568 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370773 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370809 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370958 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.370957 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.472108 1039759 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:31.478939 1039759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:31.630720 1039759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:31.637768 1039759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:31.637874 1039759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:31.655476 1039759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:31.655504 1039759 start.go:495] detecting cgroup driver to use...
	I0729 14:38:31.655584 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:31.679387 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:31.704260 1039759 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:31.704318 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:31.727875 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:31.743197 1039759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:31.867502 1039759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:32.035088 1039759 docker.go:233] disabling docker service ...
	I0729 14:38:32.035169 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:32.050118 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:32.064828 1039759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:32.202938 1039759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:32.333330 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:32.348845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:32.369848 1039759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:38:32.369923 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.381787 1039759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:32.381893 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.394331 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.405323 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.417259 1039759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:32.428997 1039759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:32.440934 1039759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:32.441003 1039759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:32.454949 1039759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:32.466042 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:32.596308 1039759 ssh_runner.go:195] Run: sudo systemctl restart crio
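The block above is the CRI-O preparation sequence: write /etc/crictl.yaml, pin the pause image, switch to the cgroupfs cgroup manager, set conmon_cgroup, enable IP forwarding, and restart the service. A small Go sketch that reproduces the drop-in edits as sed commands; this is illustrative only, since minikube runs them over SSH rather than printing them.

// Illustrative sketch (not minikube source): the CRI-O drop-in edits shown in
// the log, expressed as sed commands over 02-crio.conf.
package main

import "fmt"

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		// pin the pause image used for pod sandboxes
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' ` + conf,
		// switch CRI-O to the cgroupfs cgroup manager
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		// drop any stale conmon_cgroup line, then re-add it after cgroup_manager
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
	}
	for _, e := range edits {
		fmt.Println(e)
	}
}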
	I0729 14:38:32.762548 1039759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:32.762632 1039759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:32.768336 1039759 start.go:563] Will wait 60s for crictl version
	I0729 14:38:32.768447 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:32.772850 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:32.829827 1039759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:32.829936 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.863269 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.897768 1039759 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:38:32.899209 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:32.902257 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902649 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:32.902680 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902928 1039759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:32.908590 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:32.921952 1039759 kubeadm.go:883] updating cluster {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:32.922094 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:38:32.922141 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:32.969932 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:32.970003 1039759 ssh_runner.go:195] Run: which lz4
	I0729 14:38:32.974564 1039759 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:38:32.980128 1039759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:32.980173 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:38:32.795590 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.295541 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.750580 1038758 main.go:141] libmachine: (no-preload-603534) Waiting to get IP...
	I0729 14:38:31.751732 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.752236 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.752340 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.752236 1040763 retry.go:31] will retry after 239.008836ms: waiting for machine to come up
	I0729 14:38:31.993011 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.993538 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.993569 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.993481 1040763 retry.go:31] will retry after 288.863538ms: waiting for machine to come up
	I0729 14:38:32.284306 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.284941 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.284980 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.284867 1040763 retry.go:31] will retry after 410.903425ms: waiting for machine to come up
	I0729 14:38:32.697686 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.698291 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.698322 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.698227 1040763 retry.go:31] will retry after 423.090324ms: waiting for machine to come up
	I0729 14:38:33.122914 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.123550 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.123579 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.123500 1040763 retry.go:31] will retry after 744.030348ms: waiting for machine to come up
	I0729 14:38:33.869849 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.870499 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.870523 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.870456 1040763 retry.go:31] will retry after 888.516658ms: waiting for machine to come up
	I0729 14:38:34.760145 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:34.760594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:34.760627 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:34.760534 1040763 retry.go:31] will retry after 889.371631ms: waiting for machine to come up
	I0729 14:38:35.651169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:35.651700 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:35.651731 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:35.651636 1040763 retry.go:31] will retry after 1.200333492s: waiting for machine to come up
	I0729 14:38:33.181695 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.672201 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:34.707140 1039759 crio.go:462] duration metric: took 1.732619622s to copy over tarball
	I0729 14:38:34.707232 1039759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:37.740076 1039759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.032804006s)
	I0729 14:38:37.740105 1039759 crio.go:469] duration metric: took 3.032930405s to extract the tarball
	I0729 14:38:37.740113 1039759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:37.786934 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:37.827451 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:37.827484 1039759 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:37.827576 1039759 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:37.827606 1039759 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.827624 1039759 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.827702 1039759 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.827607 1039759 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.827683 1039759 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829621 1039759 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.829709 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.829724 1039759 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.829628 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829808 1039759 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:38:37.829625 1039759 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.113249 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.373433 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.378382 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.380909 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.382431 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.391678 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.392565 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.419739 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:38:38.491174 1039759 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:38:38.491255 1039759 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.491320 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570681 1039759 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:38:38.570784 1039759 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:38:38.570832 1039759 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.570889 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570792 1039759 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.570721 1039759 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:38:38.570966 1039759 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.570977 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570992 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.576687 1039759 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:38:38.576728 1039759 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.576769 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587650 1039759 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:38:38.587699 1039759 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.587701 1039759 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:38:38.587738 1039759 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:38:38.587753 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587791 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587866 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.587883 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.587913 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.587948 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.591209 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.599567 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.610869 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:38:38.742939 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:38:38.742974 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:38:38.743091 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:38:38.743098 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:38:38.745789 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:38:38.745857 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:38:38.753643 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:38:38.753704 1039759 cache_images.go:92] duration metric: took 926.203812ms to LoadCachedImages
	W0729 14:38:38.753790 1039759 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 14:38:38.753804 1039759 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.20.0 crio true true} ...
	I0729 14:38:38.753931 1039759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-360866 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
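kubeadm.go renders the kubelet systemd drop-in shown above from the node settings in this config. A simplified text/template sketch of that rendering follows; the template text and field names are hypothetical and taken from the log output, not from minikube's actual source.

// Simplified sketch (hypothetical template, not minikube source): render the
// kubelet systemd drop-in shown above with text/template.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.20.0",
		"NodeName":          "old-k8s-version-360866",
		"NodeIP":            "192.168.39.71",
	})
}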
	I0729 14:38:38.753992 1039759 ssh_runner.go:195] Run: crio config
	I0729 14:38:38.802220 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:38:38.802246 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:38.802258 1039759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:38.802285 1039759 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-360866 NodeName:old-k8s-version-360866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:38:38.802487 1039759 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-360866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:38.802591 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:38:38.816832 1039759 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:38.816934 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:38.827468 1039759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 14:38:38.847125 1039759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:38.865302 1039759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 14:38:37.795799 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:40.294979 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:36.853388 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:36.853944 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:36.853979 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:36.853881 1040763 retry.go:31] will retry after 1.750535475s: waiting for machine to come up
	I0729 14:38:38.605644 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:38.606135 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:38.606185 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:38.606079 1040763 retry.go:31] will retry after 2.245294623s: waiting for machine to come up
	I0729 14:38:40.853761 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:40.854277 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:40.854311 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:40.854214 1040763 retry.go:31] will retry after 1.864975071s: waiting for machine to come up
	I0729 14:38:38.299326 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:39.170692 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.170720 1039440 pod_ready.go:81] duration metric: took 8.008696752s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.170735 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177419 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.177449 1039440 pod_ready.go:81] duration metric: took 6.705958ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177463 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185538 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.185566 1039440 pod_ready.go:81] duration metric: took 2.008093791s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185580 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193833 1039440 pod_ready.go:92] pod "kube-proxy-p6dv5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.193864 1039440 pod_ready.go:81] duration metric: took 8.275486ms for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193878 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200931 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.200963 1039440 pod_ready.go:81] duration metric: took 7.075212ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200978 1039440 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:38.884267 1039759 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:38.889206 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:38.905643 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:39.032065 1039759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:39.051892 1039759 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866 for IP: 192.168.39.71
	I0729 14:38:39.051991 1039759 certs.go:194] generating shared ca certs ...
	I0729 14:38:39.052019 1039759 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.052203 1039759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:39.052258 1039759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:39.052270 1039759 certs.go:256] generating profile certs ...
	I0729 14:38:39.091359 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key
	I0729 14:38:39.091485 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0
	I0729 14:38:39.091554 1039759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key
	I0729 14:38:39.091718 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:39.091763 1039759 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:39.091776 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:39.091804 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:39.091835 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:39.091867 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:39.091924 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:39.092850 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:39.125528 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:39.153093 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:39.181324 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:39.235516 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 14:38:39.262599 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:38:39.293085 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:39.326318 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:38:39.361548 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:39.386876 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:39.412529 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:39.438418 1039759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:39.459519 1039759 ssh_runner.go:195] Run: openssl version
	I0729 14:38:39.466109 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:39.477941 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482748 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482820 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.489099 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:39.500207 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:39.511513 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516125 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516183 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.522297 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:39.533536 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:39.544996 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549681 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549733 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.556332 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
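The three Run sequences above repeat one pattern per CA bundle: hash the certificate with openssl, then symlink it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients find it by subject hash. A rough Go equivalent, purely illustrative (not minikube's implementation) and assuming openssl is on PATH and the process may write /etc/ssl/certs:

// Sketch: compute the subject hash of a CA cert and link it into /etc/ssl/certs.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then create a fresh symlink.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}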
	I0729 14:38:39.571393 1039759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:39.578420 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:39.586316 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:39.593450 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:39.600604 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:39.607483 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:39.614692 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
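The -checkend 86400 calls above fail when a certificate expires within the next 24 hours. A hedged sketch of the same check using Go's crypto/x509 (the path is taken from the first Run line; this is not minikube's own code):

// Sketch: report whether a PEM certificate expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour) // 86400 seconds, as in -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}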
	I0729 14:38:39.621776 1039759 kubeadm.go:392] StartCluster: {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:39.621893 1039759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:39.621955 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.673544 1039759 cri.go:89] found id: ""
	I0729 14:38:39.673634 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:39.687887 1039759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:39.687912 1039759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:39.687963 1039759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:39.701616 1039759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:39.702914 1039759 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:39.703576 1039759 kubeconfig.go:62] /home/jenkins/minikube-integration/19338-974764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-360866" cluster setting kubeconfig missing "old-k8s-version-360866" context setting]
	I0729 14:38:39.704951 1039759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.715056 1039759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:39.728384 1039759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.71
	I0729 14:38:39.728448 1039759 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:39.728466 1039759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:39.728534 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.778476 1039759 cri.go:89] found id: ""
	I0729 14:38:39.778561 1039759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:39.800712 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:39.813243 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:39.813265 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:39.813323 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:38:39.824822 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:39.824897 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:39.834966 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:38:39.847660 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:39.847887 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:39.861117 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.873868 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:39.873936 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.884195 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:38:39.895155 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:39.895234 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:39.909138 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:39.920721 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:40.055932 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.173909 1039759 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.117933178s)
	I0729 14:38:41.173947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.419684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.550852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.655941 1039759 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:41.656040 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.156080 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.656948 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.656087 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
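The repeated pgrep runs above poll for the kube-apiserver process roughly every 500ms. A minimal illustrative loop in the same spirit (the 2-minute timeout is an assumption, not the value minikube uses):

// Sketch: poll until a kube-apiserver process matching the pattern appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // hypothetical timeout
	for time.Now().Before(deadline) {
		// Same check as the log: pgrep -xnf 'kube-apiserver.*minikube.*'
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}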
	I0729 14:38:42.794217 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.293634 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:42.720182 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:42.720674 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:42.720701 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:42.720614 1040763 retry.go:31] will retry after 2.929394717s: waiting for machine to come up
	I0729 14:38:45.653508 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:45.654044 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:45.654069 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:45.653993 1040763 retry.go:31] will retry after 4.133064498s: waiting for machine to come up
	I0729 14:38:43.208287 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.706607 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:44.156583 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:44.657199 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.156268 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.656786 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.156393 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.656151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.156507 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.656922 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.156840 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.656756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.294322 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.795189 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.789721 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790248 1038758 main.go:141] libmachine: (no-preload-603534) Found IP for machine: 192.168.61.116
	I0729 14:38:49.790272 1038758 main.go:141] libmachine: (no-preload-603534) Reserving static IP address...
	I0729 14:38:49.790290 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has current primary IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790823 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.790860 1038758 main.go:141] libmachine: (no-preload-603534) Reserved static IP address: 192.168.61.116
	I0729 14:38:49.790883 1038758 main.go:141] libmachine: (no-preload-603534) DBG | skip adding static IP to network mk-no-preload-603534 - found existing host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"}
	I0729 14:38:49.790920 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Getting to WaitForSSH function...
	I0729 14:38:49.790937 1038758 main.go:141] libmachine: (no-preload-603534) Waiting for SSH to be available...
	I0729 14:38:49.793243 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793646 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.793679 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793820 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH client type: external
	I0729 14:38:49.793850 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa (-rw-------)
	I0729 14:38:49.793884 1038758 main.go:141] libmachine: (no-preload-603534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:49.793899 1038758 main.go:141] libmachine: (no-preload-603534) DBG | About to run SSH command:
	I0729 14:38:49.793961 1038758 main.go:141] libmachine: (no-preload-603534) DBG | exit 0
	I0729 14:38:49.924827 1038758 main.go:141] libmachine: (no-preload-603534) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:49.925188 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetConfigRaw
	I0729 14:38:49.925835 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:49.928349 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.928799 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.928830 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.929091 1038758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/config.json ...
	I0729 14:38:49.929313 1038758 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:49.929334 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:49.929556 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:49.932040 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932431 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.932473 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932629 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:49.932798 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.932930 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.933033 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:49.933142 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:49.933313 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:49.933324 1038758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:50.049016 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:50.049059 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049328 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:38:50.049354 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049566 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.052138 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052532 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.052561 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052736 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.052918 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053093 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053269 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.053462 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.053641 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.053653 1038758 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-603534 && echo "no-preload-603534" | sudo tee /etc/hostname
	I0729 14:38:50.189302 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-603534
	
	I0729 14:38:50.189341 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.192559 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.192949 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.192974 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.193248 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.193476 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193689 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193870 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.194082 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.194305 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.194329 1038758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603534/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:50.322506 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:50.322540 1038758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:50.322564 1038758 buildroot.go:174] setting up certificates
	I0729 14:38:50.322577 1038758 provision.go:84] configureAuth start
	I0729 14:38:50.322589 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.322938 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:50.325594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.325957 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.325994 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.326139 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.328455 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328803 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.328828 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328950 1038758 provision.go:143] copyHostCerts
	I0729 14:38:50.329015 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:50.329025 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:50.329078 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:50.329165 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:50.329173 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:50.329192 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:50.329243 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:50.329249 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:50.329264 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:50.329310 1038758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.no-preload-603534 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-603534]
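The provision.go line above generates a server certificate whose SANs are the listed IPs and hostnames. As a rough, self-signed sketch of how that SAN list maps onto Go's x509 certificate template (illustrative only; the real provisioner signs with the cluster CA key, which is omitted here):

// Sketch: build a server certificate template carrying the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-603534"}},
		// SANs from the log: 127.0.0.1 192.168.61.116 localhost minikube no-preload-603534
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.116")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-603534"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key) // self-signed for brevity
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}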
	I0729 14:38:50.447706 1038758 provision.go:177] copyRemoteCerts
	I0729 14:38:50.447777 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:50.447810 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.450714 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451106 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.451125 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451444 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.451679 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.451855 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.451975 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.539025 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:50.567887 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:50.594581 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 14:38:50.619475 1038758 provision.go:87] duration metric: took 296.880769ms to configureAuth
	I0729 14:38:50.619509 1038758 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:50.619708 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:38:50.619797 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.622753 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623121 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.623151 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623331 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.623519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623684 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623813 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.623971 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.624151 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.624168 1038758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:50.895618 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:50.895649 1038758 machine.go:97] duration metric: took 966.320375ms to provisionDockerMachine
	I0729 14:38:50.895662 1038758 start.go:293] postStartSetup for "no-preload-603534" (driver="kvm2")
	I0729 14:38:50.895684 1038758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:50.895717 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:50.896084 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:50.896112 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.899586 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.899998 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.900031 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.900168 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.900424 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.900622 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.900799 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.987195 1038758 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:50.991924 1038758 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:50.991952 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:50.992025 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:50.992111 1038758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:50.992208 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:51.002048 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:51.029714 1038758 start.go:296] duration metric: took 134.037621ms for postStartSetup
	I0729 14:38:51.029758 1038758 fix.go:56] duration metric: took 19.66799406s for fixHost
	I0729 14:38:51.029782 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.032495 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.032819 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.032844 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.033049 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.033236 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033377 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033587 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.033767 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:51.034007 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:51.034021 1038758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:38:51.149481 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263931.130931233
	
	I0729 14:38:51.149510 1038758 fix.go:216] guest clock: 1722263931.130931233
	I0729 14:38:51.149520 1038758 fix.go:229] Guest: 2024-07-29 14:38:51.130931233 +0000 UTC Remote: 2024-07-29 14:38:51.029761931 +0000 UTC m=+354.409484230 (delta=101.169302ms)
	I0729 14:38:51.149575 1038758 fix.go:200] guest clock delta is within tolerance: 101.169302ms
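The fix.go lines above compare the guest clock against the host timestamp and accept the ~101ms delta as within tolerance. A small illustrative check using the two timestamps from the log (the one-second tolerance is an assumption, not minikube's threshold):

// Sketch: compute guest/host clock drift and compare to a tolerance.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2024, 7, 29, 14, 38, 51, 130931233, time.UTC) // "guest clock" from the log
	remote := time.Date(2024, 7, 29, 14, 38, 51, 29761931, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // hypothetical threshold
	if delta > tolerance {
		fmt.Println("guest clock drift out of tolerance:", delta)
	} else {
		fmt.Println("guest clock delta within tolerance:", delta) // here: ~101.169302ms
	}
}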
	I0729 14:38:51.149583 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 19.787859214s
	I0729 14:38:51.149617 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.149923 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:51.152671 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153054 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.153081 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153298 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.153898 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154092 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154192 1038758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:51.154245 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.154349 1038758 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:51.154378 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.157173 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157200 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157560 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157592 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157635 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157654 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157955 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.157976 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.158169 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158195 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158370 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158381 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158565 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.158572 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.260806 1038758 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:51.266847 1038758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:51.412637 1038758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:51.418879 1038758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:51.418954 1038758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:51.435946 1038758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:51.435978 1038758 start.go:495] detecting cgroup driver to use...
	I0729 14:38:51.436061 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:51.457517 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:51.472718 1038758 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:51.472811 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:51.487062 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:51.501410 1038758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:51.617292 1038758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:47.708063 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.708506 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.209337 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:51.764302 1038758 docker.go:233] disabling docker service ...
	I0729 14:38:51.764386 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:51.779137 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:51.794372 1038758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:51.930402 1038758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:52.062691 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:52.076796 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:52.095912 1038758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 14:38:52.095994 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.107507 1038758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:52.107588 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.119470 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.131252 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.141672 1038758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:52.152086 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.163682 1038758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.189614 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.200279 1038758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:52.211878 1038758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:52.211943 1038758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:52.224909 1038758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:52.234312 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:52.357370 1038758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:52.492520 1038758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:52.492622 1038758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:52.497537 1038758 start.go:563] Will wait 60s for crictl version
	I0729 14:38:52.497595 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.501292 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:52.544320 1038758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:52.544428 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.575452 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.605920 1038758 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
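
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", and a default_sysctls entry enabling net.ipv4.ip_unprivileged_port_start=0) before crio is restarted and crictl reports runtime version 1.29.1. A minimal sketch of spot-checking those values on the guest, assuming the same file layout as in the log:

    # Spot-check the settings the sed edits above should have written (paths from the log).
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo crictl version    # reads the /etc/crictl.yaml written above; expects RuntimeName cri-o, RuntimeVersion 1.29.1
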
	I0729 14:38:49.156539 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:49.656397 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.656968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.156321 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.656183 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.157099 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.656725 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.157009 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.656787 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.796331 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:53.799083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.607410 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:52.610017 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610296 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:52.610330 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610553 1038758 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:52.614659 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
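
The one-liner above is minikube's idempotent /etc/hosts update: strip any existing line ending in a tab plus host.minikube.internal, append a fresh "192.168.61.1<TAB>host.minikube.internal" entry, write the result to a PID-suffixed temp file ($$), and sudo-copy it back over /etc/hosts. The same pattern is reused later for control-plane.minikube.internal. A generalized sketch, where NAME and IP are placeholders standing in for the values shown in the log:

    # Generalized form of the idempotent hosts update above; NAME/IP are illustrative placeholders.
    NAME=host.minikube.internal
    IP=192.168.61.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
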
	I0729 14:38:52.626967 1038758 kubeadm.go:883] updating cluster {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:52.627087 1038758 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:38:52.627124 1038758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:52.662824 1038758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 14:38:52.662852 1038758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:52.662901 1038758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.662968 1038758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.663040 1038758 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 14:38:52.663043 1038758 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.663066 1038758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.663017 1038758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.664360 1038758 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 14:38:52.664947 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.664965 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.664954 1038758 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.665015 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.665023 1038758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.665351 1038758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.665423 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.829143 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.829158 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.829541 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.851797 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.866728 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 14:38:52.884604 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.893636 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.946087 1038758 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 14:38:52.946134 1038758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 14:38:52.946160 1038758 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.946170 1038758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.946173 1038758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 14:38:52.946192 1038758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.946216 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946221 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946217 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.954361 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.001715 1038758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 14:38:53.001766 1038758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.001826 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106651 1038758 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 14:38:53.106713 1038758 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.106770 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106838 1038758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 14:38:53.106883 1038758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.106921 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106927 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:53.106962 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:53.107012 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:53.107038 1038758 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 14:38:53.107067 1038758 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.107079 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.107092 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.131562 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.212076 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.212199 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.212272 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.214338 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.214430 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.216771 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.216941 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 14:38:53.217037 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:53.220214 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.220306 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.272021 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 14:38:53.272140 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:53.275939 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 14:38:53.275988 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276008 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.276009 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276029 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:38:53.276054 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.301528 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 14:38:53.301578 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 14:38:53.301600 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 14:38:53.301647 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 14:38:53.301759 1038758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:38:55.357295 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.08120738s)
	I0729 14:38:55.357329 1038758 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.081270007s)
	I0729 14:38:55.357371 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 14:38:55.357338 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 14:38:55.357384 1038758 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.055605102s)
	I0729 14:38:55.357406 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 14:38:55.357407 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:55.357464 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
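
Because no preload tarball exists for v1.31.0-beta.0 (see the cache_images.go lines above), minikube falls back to loading each required image individually: it stats the tarball under /var/lib/minikube/images on the guest, skips the host-to-guest copy when the file already exists, and then imports it into CRI-O's storage with podman load. Below is a sketch of one such iteration using the kube-scheduler image from the log; the surrounding loop and the copy step are simplified assumptions (minikube performs the copy over its own SSH runner).

    # One iteration of the no-preload image-load path (paths from the log; loop/copy details are assumptions).
    tar=/var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
    if stat -c "%s %y" "$tar" >/dev/null 2>&1; then
        echo "copy: skipping $tar (exists)"      # mirrors the ssh_runner.go:356 message
    fi                                           # otherwise the tarball is copied from the host cache first
    sudo podman load -i "$tar"                   # logged as "Transferred and loaded ... from cache" on success
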
	I0729 14:38:54.708330 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.207468 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:54.156921 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:54.656957 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.156201 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.656783 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.156180 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.656984 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.156610 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.656127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.156785 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.656192 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.295143 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:58.795511 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.217512 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.860011805s)
	I0729 14:38:57.217539 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 14:38:57.217570 1038758 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:57.217634 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:59.187398 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969733063s)
	I0729 14:38:59.187443 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 14:38:59.187482 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:59.187562 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:39:01.138568 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.950970137s)
	I0729 14:39:01.138617 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 14:39:01.138654 1038758 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:39:01.138740 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:59.207657 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:01.208795 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:59.156740 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:59.656223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.156726 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.656593 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.156115 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.656364 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.157069 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.656491 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.156938 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.656898 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.293858 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:03.484613 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.793953 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.231830 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.093043665s)
	I0729 14:39:04.231866 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 14:39:04.231897 1038758 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:04.231963 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:05.182458 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 14:39:05.182512 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:05.182566 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:03.209198 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.707557 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.157177 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:04.656505 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.156530 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.656389 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.156606 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.657121 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.157048 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.656497 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.156327 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.656868 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.794522 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.794886 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:07.253615 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.070972791s)
	I0729 14:39:07.253665 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 14:39:07.253700 1038758 cache_images.go:123] Successfully loaded all cached images
	I0729 14:39:07.253707 1038758 cache_images.go:92] duration metric: took 14.590842072s to LoadCachedImages
	I0729 14:39:07.253720 1038758 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0-beta.0 crio true true} ...
	I0729 14:39:07.253899 1038758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-603534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:39:07.253980 1038758 ssh_runner.go:195] Run: crio config
	I0729 14:39:07.309694 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:07.309720 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:07.309731 1038758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:39:07.309754 1038758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603534 NodeName:no-preload-603534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:39:07.309916 1038758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603534"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:39:07.309985 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 14:39:07.321876 1038758 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:39:07.321967 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:39:07.333057 1038758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 14:39:07.350193 1038758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 14:39:07.367171 1038758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 14:39:07.384123 1038758 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0729 14:39:07.387896 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:39:07.400317 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:39:07.525822 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:39:07.545142 1038758 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534 for IP: 192.168.61.116
	I0729 14:39:07.545167 1038758 certs.go:194] generating shared ca certs ...
	I0729 14:39:07.545189 1038758 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:39:07.545389 1038758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:39:07.545448 1038758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:39:07.545463 1038758 certs.go:256] generating profile certs ...
	I0729 14:39:07.545582 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/client.key
	I0729 14:39:07.545658 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key.117a155a
	I0729 14:39:07.545725 1038758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key
	I0729 14:39:07.545881 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:39:07.545913 1038758 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:39:07.545922 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:39:07.545945 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:39:07.545969 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:39:07.545990 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:39:07.546027 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:39:07.546679 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:39:07.582488 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:39:07.617327 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:39:07.647627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:39:07.685799 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:39:07.720365 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:39:07.744627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:39:07.771409 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:39:07.797570 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:39:07.820888 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:39:07.843714 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:39:07.867365 1038758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:39:07.884283 1038758 ssh_runner.go:195] Run: openssl version
	I0729 14:39:07.890379 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:39:07.901894 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906431 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906487 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.912284 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:39:07.923393 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:39:07.934119 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938563 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938620 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.944115 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:39:07.954815 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:39:07.965864 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970695 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970761 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.977340 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
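
The certificate installation above follows the standard OpenSSL CA directory convention: each PEM is copied to /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and also linked under its subject-hash name (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two test certificates), which is how TLS clients on the guest resolve the issuing CA. A minimal sketch of how those hash names are derived, using the minikubeCA file from the log:

    # Derive the <hash>.0 symlink name used above (file path from the log).
    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"

The `openssl x509 -checkend 86400` calls that follow simply verify that none of the cluster certificates expire within the next 24 hours before the cluster restart proceeds.
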
	I0729 14:39:07.990416 1038758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:39:07.995446 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:39:08.001615 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:39:08.007621 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:39:08.013648 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:39:08.019525 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:39:08.025505 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:39:08.031480 1038758 kubeadm.go:392] StartCluster: {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:39:08.031592 1038758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:39:08.031657 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.077847 1038758 cri.go:89] found id: ""
	I0729 14:39:08.077936 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:39:08.088616 1038758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:39:08.088639 1038758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:39:08.088704 1038758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:39:08.101150 1038758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:39:08.102305 1038758 kubeconfig.go:125] found "no-preload-603534" server: "https://192.168.61.116:8443"
	I0729 14:39:08.105529 1038758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:39:08.117031 1038758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0729 14:39:08.117070 1038758 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:39:08.117085 1038758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:39:08.117148 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.171626 1038758 cri.go:89] found id: ""
	I0729 14:39:08.171706 1038758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:39:08.190491 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:39:08.200776 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:39:08.200806 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:39:08.200873 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:39:08.211430 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:39:08.211483 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:39:08.221865 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:39:08.231668 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:39:08.231719 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:39:08.242027 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.251585 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:39:08.251639 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.261521 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:39:08.271210 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:39:08.271284 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:39:08.281112 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:39:08.290948 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:08.417397 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.400064 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.590859 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.670134 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.781580 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:39:09.781719 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.282592 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.781923 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.843114 1038758 api_server.go:72] duration metric: took 1.061535691s to wait for apiserver process to appear ...
	I0729 14:39:10.843151 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:39:10.843182 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:10.843715 1038758 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0729 14:39:11.343301 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:08.207563 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:10.208276 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.156858 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:09.656910 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.156126 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.657149 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.156223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.657184 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.156454 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.656896 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.656971 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.993249 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:13.993278 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:13.993298 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.011972 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:14.012012 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:14.343228 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.347946 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.347983 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:14.844144 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.858278 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.858311 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:15.343885 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:15.350223 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:39:15.360468 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:39:15.360513 1038758 api_server.go:131] duration metric: took 4.517353977s to wait for apiserver health ...
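For reference, the probe sequence above (403 while the rbac/bootstrap-roles hook is still pending, 500 while other post-start hooks finish, then 200) can be reproduced by hand against the same endpoint. A minimal sketch, assuming anonymous access from a machine that can reach the node; the ?verbose query prints the per-check [+]/[-] breakdown seen in the log:

    # Status code only; -k skips TLS verification for the self-signed cert.
    curl -ks -o /dev/null -w '%{http_code}\n' https://192.168.61.116:8443/healthz
    # Per-check breakdown ([+]ping ok, [-]poststarthook/... failed, ...).
    curl -ks 'https://192.168.61.116:8443/healthz?verbose'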
	I0729 14:39:15.360524 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:15.360532 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:15.362679 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:39:12.293516 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.294107 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.364237 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:39:15.379974 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
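The bridge CNI step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the file contents are not shown in the log. Purely to illustrate the format, a minimal bridge-plugin conflist could look like the following; every value here is hypothetical and is not minikube's actual config:

    # Illustrative only -- not the file minikube generated above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF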
	I0729 14:39:15.422444 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:39:15.441468 1038758 system_pods.go:59] 8 kube-system pods found
	I0729 14:39:15.441512 1038758 system_pods.go:61] "coredns-5cfdc65f69-tjdx4" [986cdef3-de61-4c0f-bc75-fae4f6b44a37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:39:15.441525 1038758 system_pods.go:61] "etcd-no-preload-603534" [e27f5761-5322-4d88-b90a-bcff42c9dfa5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:39:15.441537 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [33ed9f7c-1240-40cf-b51d-125b3473bfd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:39:15.441547 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [f79520a2-380e-4d8a-b1ff-78c6cd3d3741] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:39:15.441559 1038758 system_pods.go:61] "kube-proxy-ftpk5" [a5471ad7-5fd3-49b7-8631-4ca2962761d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:39:15.441568 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [860e262c-f036-4181-a0ad-8ba0058a47d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:39:15.441580 1038758 system_pods.go:61] "metrics-server-78fcd8795b-59sbc" [8af92987-ce8d-434f-93de-16d0adc35fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:39:15.441598 1038758 system_pods.go:61] "storage-provisioner" [579d0cc8-e30e-4ee3-ac55-c2f0bc5871e1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:39:15.441606 1038758 system_pods.go:74] duration metric: took 19.133029ms to wait for pod list to return data ...
	I0729 14:39:15.441618 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:39:15.445594 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:39:15.445630 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:39:15.445646 1038758 node_conditions.go:105] duration metric: took 4.019018ms to run NodePressure ...
	I0729 14:39:15.445678 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:15.743404 1038758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751028 1038758 kubeadm.go:739] kubelet initialised
	I0729 14:39:15.751050 1038758 kubeadm.go:740] duration metric: took 7.619795ms waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751059 1038758 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:39:15.759157 1038758 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:12.708704 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.208434 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:14.656806 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.156564 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.656881 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.156239 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.656440 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.157130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.656240 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.156161 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.656808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.294741 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:18.797700 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.768132 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.265670 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.709929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.206710 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.207809 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:19.156721 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:19.656766 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.156352 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.656788 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.156179 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.656213 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.156475 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.656275 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.156592 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.656979 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.294265 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:23.294366 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:25.794648 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.265947 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.266644 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.708214 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:27.208824 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.156798 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:24.656473 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.156551 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.656356 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.156887 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.656332 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.156494 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.656839 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.156763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.656512 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.795415 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:30.293460 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:26.766260 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.265817 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.265851 1038758 pod_ready.go:81] duration metric: took 13.506661461s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.265865 1038758 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276021 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.276043 1038758 pod_ready.go:81] duration metric: took 10.172055ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276052 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280197 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.280215 1038758 pod_ready.go:81] duration metric: took 4.156785ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280223 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284076 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.284096 1038758 pod_ready.go:81] duration metric: took 3.865927ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284122 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288280 1038758 pod_ready.go:92] pod "kube-proxy-ftpk5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.288297 1038758 pod_ready.go:81] duration metric: took 4.16843ms for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288305 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666771 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.666802 1038758 pod_ready.go:81] duration metric: took 378.49001ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666813 1038758 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
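The pod_ready helper above polls each pod's Ready condition; the same check can be made directly with kubectl. A sketch, assuming the kubeconfig context for this profile (no-preload-603534) is selected:

    # Print the Ready condition of the pod the test is now waiting on.
    kubectl -n kube-system get pod metrics-server-78fcd8795b-59sbc \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Or block until it becomes Ready, mirroring the 4m0s wait above.
    kubectl -n kube-system wait --for=condition=Ready \
      pod/metrics-server-78fcd8795b-59sbc --timeout=4m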
	I0729 14:39:29.706596 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:32.208095 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.156096 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:29.656289 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.156756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.656888 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.156563 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.156271 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.656562 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.293988 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.793456 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:31.674203 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.174002 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.708005 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:37.206789 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.157046 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:34.656398 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.156198 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.656763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.156542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.656994 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.156808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.657093 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.156119 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.657017 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.793771 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.294267 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:36.676693 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.172713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.174348 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.207584 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.707645 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:39.656176 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.156455 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.656609 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.156891 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.656327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:41.656423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:41.701839 1039759 cri.go:89] found id: ""
	I0729 14:39:41.701863 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.701872 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:41.701878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:41.701942 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:41.738281 1039759 cri.go:89] found id: ""
	I0729 14:39:41.738308 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.738315 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:41.738321 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:41.738377 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:41.771954 1039759 cri.go:89] found id: ""
	I0729 14:39:41.771981 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.771990 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:41.771996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:41.772060 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:41.806157 1039759 cri.go:89] found id: ""
	I0729 14:39:41.806182 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.806190 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:41.806196 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:41.806249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:41.841284 1039759 cri.go:89] found id: ""
	I0729 14:39:41.841312 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.841319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:41.841325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:41.841394 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:41.875864 1039759 cri.go:89] found id: ""
	I0729 14:39:41.875893 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.875902 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:41.875908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:41.875962 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:41.909797 1039759 cri.go:89] found id: ""
	I0729 14:39:41.909824 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.909833 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:41.909840 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:41.909892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:41.943886 1039759 cri.go:89] found id: ""
	I0729 14:39:41.943912 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.943920 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:41.943929 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:41.943944 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:41.983224 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:41.983254 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:42.035264 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:42.035303 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:42.049343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:42.049369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:42.171904 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:42.171924 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:42.171947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
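The container scan above runs crictl once per component and finds nothing (found id: ""), which is why minikube falls back to gathering host logs. The same scan can be run by hand on the node with the exact command from the log:

    # An empty result matches the `found id: ""` lines above.
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Repeat for every component the test looks for.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done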
	I0729 14:39:41.295209 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.795811 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.673853 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:45.674302 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.207555 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:46.707384 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.738629 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:44.753497 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:44.753582 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:44.793256 1039759 cri.go:89] found id: ""
	I0729 14:39:44.793283 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.793291 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:44.793298 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:44.793363 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:44.833698 1039759 cri.go:89] found id: ""
	I0729 14:39:44.833726 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.833733 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:44.833739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:44.833792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:44.876328 1039759 cri.go:89] found id: ""
	I0729 14:39:44.876357 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.876366 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:44.876372 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:44.876471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:44.918091 1039759 cri.go:89] found id: ""
	I0729 14:39:44.918121 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.918132 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:44.918140 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:44.918210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:44.965105 1039759 cri.go:89] found id: ""
	I0729 14:39:44.965137 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.965149 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:44.965157 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:44.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:45.014119 1039759 cri.go:89] found id: ""
	I0729 14:39:45.014150 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.014162 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:45.014170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:45.014243 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:45.059826 1039759 cri.go:89] found id: ""
	I0729 14:39:45.059858 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.059870 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:45.059879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:45.059946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:45.099666 1039759 cri.go:89] found id: ""
	I0729 14:39:45.099706 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.099717 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:45.099730 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:45.099748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:45.144219 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:45.144263 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:45.199719 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:45.199754 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:45.214225 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:45.214260 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:45.289090 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:45.289119 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:45.289138 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:47.860797 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:47.874523 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:47.874606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:47.913570 1039759 cri.go:89] found id: ""
	I0729 14:39:47.913599 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.913608 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:47.913615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:47.913674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:47.946699 1039759 cri.go:89] found id: ""
	I0729 14:39:47.946725 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.946734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:47.946740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:47.946792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:47.986492 1039759 cri.go:89] found id: ""
	I0729 14:39:47.986533 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.986546 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:47.986554 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:47.986635 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:48.027232 1039759 cri.go:89] found id: ""
	I0729 14:39:48.027260 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.027268 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:48.027274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:48.027327 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:48.065119 1039759 cri.go:89] found id: ""
	I0729 14:39:48.065145 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.065153 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:48.065159 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:48.065217 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:48.105087 1039759 cri.go:89] found id: ""
	I0729 14:39:48.105119 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.105128 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:48.105134 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:48.105199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:48.144684 1039759 cri.go:89] found id: ""
	I0729 14:39:48.144718 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.144730 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:48.144737 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:48.144816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:48.180350 1039759 cri.go:89] found id: ""
	I0729 14:39:48.180380 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.180389 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:48.180401 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:48.180436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:48.259859 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:48.259905 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:48.301132 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:48.301163 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:48.352753 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:48.352795 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:48.365936 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:48.365969 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:48.434634 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
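The repeated pgrep polls and the localhost:8443 connection refusals point at the same root cause: no kube-apiserver process ever comes up on this node. A quick manual check, assuming shell access to the node:

    # Is an apiserver process running at all? (same pattern the test polls)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process'
    # Is anything listening on the port kubectl is trying to reach?
    sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'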
	I0729 14:39:46.293123 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.293674 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.294113 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:47.674411 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.173727 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.707887 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:51.207444 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.934903 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:50.948702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:50.948787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:50.982889 1039759 cri.go:89] found id: ""
	I0729 14:39:50.982917 1039759 logs.go:276] 0 containers: []
	W0729 14:39:50.982927 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:50.982933 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:50.983010 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:51.020679 1039759 cri.go:89] found id: ""
	I0729 14:39:51.020713 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.020726 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:51.020734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:51.020818 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:51.055114 1039759 cri.go:89] found id: ""
	I0729 14:39:51.055147 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.055158 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:51.055166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:51.055237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:51.089053 1039759 cri.go:89] found id: ""
	I0729 14:39:51.089087 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.089099 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:51.089108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:51.089184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:51.125823 1039759 cri.go:89] found id: ""
	I0729 14:39:51.125851 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.125861 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:51.125868 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:51.125938 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:51.162645 1039759 cri.go:89] found id: ""
	I0729 14:39:51.162683 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.162694 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:51.162702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:51.162767 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:51.196820 1039759 cri.go:89] found id: ""
	I0729 14:39:51.196849 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.196857 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:51.196864 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:51.196937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:51.236442 1039759 cri.go:89] found id: ""
	I0729 14:39:51.236469 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.236479 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:51.236491 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:51.236506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:51.317077 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:51.317101 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:51.317119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:51.398118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:51.398172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:51.437096 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:51.437128 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:51.488949 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:51.488992 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
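The log-gathering steps above are plain journalctl/dmesg invocations and can be re-run interactively for deeper digging. A sketch using minikube ssh; <profile> is a placeholder, since the profile name for this run is not shown on these lines:

    # Last 400 kubelet lines, as the test collects them.
    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    # CRI-O unit logs and warning-or-worse kernel messages.
    minikube -p <profile> ssh -- sudo journalctl -u crio -n 400
    minikube -p <profile> ssh -- sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg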
	I0729 14:39:52.795544 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.294184 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:52.174241 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.672702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:53.207592 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.706971 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.004536 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:54.019400 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:54.019480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:54.054592 1039759 cri.go:89] found id: ""
	I0729 14:39:54.054626 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.054639 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:54.054647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:54.054712 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:54.090184 1039759 cri.go:89] found id: ""
	I0729 14:39:54.090217 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.090227 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:54.090234 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:54.090304 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:54.129977 1039759 cri.go:89] found id: ""
	I0729 14:39:54.130007 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.130016 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:54.130022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:54.130081 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:54.170940 1039759 cri.go:89] found id: ""
	I0729 14:39:54.170970 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.170980 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:54.170988 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:54.171053 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:54.206197 1039759 cri.go:89] found id: ""
	I0729 14:39:54.206224 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.206244 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:54.206251 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:54.206340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:54.246929 1039759 cri.go:89] found id: ""
	I0729 14:39:54.246963 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.246973 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:54.246980 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:54.247049 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:54.286202 1039759 cri.go:89] found id: ""
	I0729 14:39:54.286231 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.286240 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:54.286245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:54.286315 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:54.321784 1039759 cri.go:89] found id: ""
	I0729 14:39:54.321815 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.321824 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:54.321837 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:54.321860 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:54.363159 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:54.363187 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:54.416151 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:54.416194 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:54.429824 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:54.429852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:54.506348 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:54.506373 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:54.506390 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.094810 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:57.108163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:57.108238 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:57.143556 1039759 cri.go:89] found id: ""
	I0729 14:39:57.143588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.143601 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:57.143608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:57.143678 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:57.177664 1039759 cri.go:89] found id: ""
	I0729 14:39:57.177695 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.177706 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:57.177714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:57.177801 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:57.212046 1039759 cri.go:89] found id: ""
	I0729 14:39:57.212106 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.212231 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:57.212249 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:57.212323 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:57.252518 1039759 cri.go:89] found id: ""
	I0729 14:39:57.252549 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.252558 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:57.252564 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:57.252677 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:57.287045 1039759 cri.go:89] found id: ""
	I0729 14:39:57.287069 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.287077 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:57.287084 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:57.287141 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:57.324553 1039759 cri.go:89] found id: ""
	I0729 14:39:57.324588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.324599 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:57.324607 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:57.324684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:57.358761 1039759 cri.go:89] found id: ""
	I0729 14:39:57.358801 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.358811 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:57.358819 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:57.358898 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:57.402023 1039759 cri.go:89] found id: ""
	I0729 14:39:57.402051 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.402062 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:57.402074 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:57.402094 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:57.445600 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:57.445632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:57.501876 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:57.501911 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:57.518264 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:57.518299 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:57.593247 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:57.593274 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:57.593292 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.793782 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.794287 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:56.673243 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.174416 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:57.707618 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.208574 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.181109 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:00.194553 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:00.194641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:00.237761 1039759 cri.go:89] found id: ""
	I0729 14:40:00.237801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.237814 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:00.237829 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:00.237901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:00.273113 1039759 cri.go:89] found id: ""
	I0729 14:40:00.273145 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.273157 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:00.273166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:00.273232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:00.312136 1039759 cri.go:89] found id: ""
	I0729 14:40:00.312169 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.312176 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:00.312182 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:00.312249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:00.349610 1039759 cri.go:89] found id: ""
	I0729 14:40:00.349642 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.349654 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:00.349662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:00.349792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:00.384121 1039759 cri.go:89] found id: ""
	I0729 14:40:00.384148 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.384157 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:00.384163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:00.384233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:00.419684 1039759 cri.go:89] found id: ""
	I0729 14:40:00.419720 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.419731 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:00.419739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:00.419809 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:00.453905 1039759 cri.go:89] found id: ""
	I0729 14:40:00.453937 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.453945 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:00.453951 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:00.454023 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:00.490124 1039759 cri.go:89] found id: ""
	I0729 14:40:00.490149 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.490158 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:00.490168 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:00.490185 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:00.562684 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:00.562713 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:00.562735 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:00.643860 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:00.643899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:00.683247 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:00.683276 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:00.734131 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:00.734166 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.249468 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:03.262712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:03.262788 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:03.300774 1039759 cri.go:89] found id: ""
	I0729 14:40:03.300801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.300816 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:03.300823 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:03.300891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:03.335367 1039759 cri.go:89] found id: ""
	I0729 14:40:03.335398 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.335409 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:03.335419 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:03.335488 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:03.375683 1039759 cri.go:89] found id: ""
	I0729 14:40:03.375717 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.375728 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:03.375734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:03.375814 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:03.409593 1039759 cri.go:89] found id: ""
	I0729 14:40:03.409623 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.409631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:03.409637 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:03.409711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:03.444531 1039759 cri.go:89] found id: ""
	I0729 14:40:03.444566 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.444578 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:03.444585 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:03.444655 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:03.479446 1039759 cri.go:89] found id: ""
	I0729 14:40:03.479476 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.479487 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:03.479495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:03.479563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:03.517277 1039759 cri.go:89] found id: ""
	I0729 14:40:03.517311 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.517321 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:03.517329 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:03.517396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:03.556343 1039759 cri.go:89] found id: ""
	I0729 14:40:03.556373 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.556381 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:03.556391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:03.556422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:03.610156 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:03.610196 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.624776 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:03.624812 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:03.696584 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:03.696609 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:03.696625 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:03.775066 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:03.775109 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:01.794683 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:03.795112 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:01.673980 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.173900 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:02.706731 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.707655 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:07.207027 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.319720 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:06.332865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:06.332937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:06.366576 1039759 cri.go:89] found id: ""
	I0729 14:40:06.366608 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.366631 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:06.366639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:06.366730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:06.402710 1039759 cri.go:89] found id: ""
	I0729 14:40:06.402735 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.402743 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:06.402748 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:06.402804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:06.439048 1039759 cri.go:89] found id: ""
	I0729 14:40:06.439095 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.439116 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:06.439125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:06.439196 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:06.473407 1039759 cri.go:89] found id: ""
	I0729 14:40:06.473443 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.473456 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:06.473464 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:06.473544 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:06.507278 1039759 cri.go:89] found id: ""
	I0729 14:40:06.507309 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.507319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:06.507327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:06.507396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:06.541573 1039759 cri.go:89] found id: ""
	I0729 14:40:06.541600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.541608 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:06.541617 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:06.541679 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:06.587666 1039759 cri.go:89] found id: ""
	I0729 14:40:06.587697 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.587707 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:06.587714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:06.587773 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:06.622415 1039759 cri.go:89] found id: ""
	I0729 14:40:06.622448 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.622459 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:06.622478 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:06.622497 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:06.659987 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:06.660019 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:06.716303 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:06.716338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:06.731051 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:06.731076 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:06.809014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:06.809045 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:06.809064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:06.293552 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:08.294453 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:10.295216 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.674445 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.174349 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.207784 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.208318 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.387843 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:09.401894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:09.401984 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:09.439385 1039759 cri.go:89] found id: ""
	I0729 14:40:09.439425 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.439438 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:09.439446 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:09.439506 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:09.474307 1039759 cri.go:89] found id: ""
	I0729 14:40:09.474340 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.474352 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:09.474361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:09.474434 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:09.508198 1039759 cri.go:89] found id: ""
	I0729 14:40:09.508233 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.508245 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:09.508253 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:09.508325 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:09.543729 1039759 cri.go:89] found id: ""
	I0729 14:40:09.543762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.543772 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:09.543779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:09.543847 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:09.598723 1039759 cri.go:89] found id: ""
	I0729 14:40:09.598760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.598769 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:09.598775 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:09.598841 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:09.636009 1039759 cri.go:89] found id: ""
	I0729 14:40:09.636038 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.636050 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:09.636058 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:09.636126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:09.675590 1039759 cri.go:89] found id: ""
	I0729 14:40:09.675618 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.675628 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:09.675636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:09.675698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:09.710331 1039759 cri.go:89] found id: ""
	I0729 14:40:09.710374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.710385 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:09.710397 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:09.710416 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:09.790014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:09.790046 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:09.790064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:09.870233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:09.870278 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:09.910421 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:09.910454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:09.962429 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:09.962474 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.476775 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:12.490804 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:12.490875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:12.529435 1039759 cri.go:89] found id: ""
	I0729 14:40:12.529466 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.529478 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:12.529485 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:12.529551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:12.564769 1039759 cri.go:89] found id: ""
	I0729 14:40:12.564806 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.564818 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:12.564826 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:12.564912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:12.600253 1039759 cri.go:89] found id: ""
	I0729 14:40:12.600285 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.600296 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:12.600304 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:12.600367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:12.636112 1039759 cri.go:89] found id: ""
	I0729 14:40:12.636146 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.636155 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:12.636161 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:12.636216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:12.675592 1039759 cri.go:89] found id: ""
	I0729 14:40:12.675621 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.675631 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:12.675639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:12.675711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:12.711438 1039759 cri.go:89] found id: ""
	I0729 14:40:12.711469 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.711480 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:12.711488 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:12.711554 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:12.745497 1039759 cri.go:89] found id: ""
	I0729 14:40:12.745524 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.745533 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:12.745539 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:12.745598 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:12.778934 1039759 cri.go:89] found id: ""
	I0729 14:40:12.778966 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.778977 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:12.778991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:12.779010 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:12.854721 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:12.854759 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:12.854780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:12.932118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:12.932158 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:12.974429 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:12.974461 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:13.030073 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:13.030108 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.795056 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.295125 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.674169 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:14.173503 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:16.175691 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:13.707268 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.708540 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.544245 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:15.559013 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:15.559090 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:15.594018 1039759 cri.go:89] found id: ""
	I0729 14:40:15.594051 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.594064 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:15.594076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:15.594147 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:15.630734 1039759 cri.go:89] found id: ""
	I0729 14:40:15.630762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.630771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:15.630777 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:15.630832 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:15.666159 1039759 cri.go:89] found id: ""
	I0729 14:40:15.666191 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.666202 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:15.666210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:15.666275 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:15.701058 1039759 cri.go:89] found id: ""
	I0729 14:40:15.701088 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.701098 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:15.701115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:15.701170 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:15.737006 1039759 cri.go:89] found id: ""
	I0729 14:40:15.737040 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.737052 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:15.737066 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:15.737139 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:15.775678 1039759 cri.go:89] found id: ""
	I0729 14:40:15.775706 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.775718 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:15.775728 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:15.775795 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:15.812239 1039759 cri.go:89] found id: ""
	I0729 14:40:15.812268 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.812276 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:15.812283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:15.812348 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:15.847653 1039759 cri.go:89] found id: ""
	I0729 14:40:15.847682 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.847693 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:15.847707 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:15.847725 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:15.903094 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:15.903137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:15.917060 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:15.917093 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:15.993458 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:15.993481 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:15.993499 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:16.073369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:16.073409 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:18.616107 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:18.630263 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:18.630340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:18.668228 1039759 cri.go:89] found id: ""
	I0729 14:40:18.668261 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.668271 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:18.668279 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:18.668342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:18.706863 1039759 cri.go:89] found id: ""
	I0729 14:40:18.706891 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.706902 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:18.706909 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:18.706978 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:18.739703 1039759 cri.go:89] found id: ""
	I0729 14:40:18.739728 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.739736 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:18.739742 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:18.739796 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:18.777025 1039759 cri.go:89] found id: ""
	I0729 14:40:18.777066 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.777077 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:18.777085 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:18.777158 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:18.814000 1039759 cri.go:89] found id: ""
	I0729 14:40:18.814026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.814039 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:18.814051 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:18.814116 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:18.851027 1039759 cri.go:89] found id: ""
	I0729 14:40:18.851058 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.851069 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:18.851076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:18.851151 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:17.796245 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.293964 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.673560 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:21.173099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.207376 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.707629 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.903888 1039759 cri.go:89] found id: ""
	I0729 14:40:18.903920 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.903932 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:18.903941 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:18.904002 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:18.938756 1039759 cri.go:89] found id: ""
	I0729 14:40:18.938784 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.938791 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:18.938801 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:18.938814 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:18.988482 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:18.988520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:19.002145 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:19.002177 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:19.085372 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:19.085397 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:19.085424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:19.171294 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:19.171387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:21.709578 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:21.722874 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:21.722941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:21.768110 1039759 cri.go:89] found id: ""
	I0729 14:40:21.768139 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.768150 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:21.768156 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:21.768210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:21.808853 1039759 cri.go:89] found id: ""
	I0729 14:40:21.808886 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.808897 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:21.808905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:21.808974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:21.843432 1039759 cri.go:89] found id: ""
	I0729 14:40:21.843472 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.843484 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:21.843493 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:21.843576 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:21.876497 1039759 cri.go:89] found id: ""
	I0729 14:40:21.876535 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.876547 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:21.876555 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:21.876633 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:21.911528 1039759 cri.go:89] found id: ""
	I0729 14:40:21.911556 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.911565 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:21.911571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:21.911626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:21.944514 1039759 cri.go:89] found id: ""
	I0729 14:40:21.944548 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.944560 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:21.944569 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:21.944641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:21.978113 1039759 cri.go:89] found id: ""
	I0729 14:40:21.978151 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.978162 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:21.978170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:21.978233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:22.012390 1039759 cri.go:89] found id: ""
	I0729 14:40:22.012438 1039759 logs.go:276] 0 containers: []
	W0729 14:40:22.012449 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:22.012461 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:22.012484 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:22.027921 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:22.027952 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:22.095087 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:22.095115 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:22.095132 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:22.178462 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:22.178495 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:22.220155 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:22.220188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:22.794431 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.295391 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:23.174050 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.673437 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:22.708012 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.207491 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:24.771932 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:24.784764 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:24.784851 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:24.820445 1039759 cri.go:89] found id: ""
	I0729 14:40:24.820473 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.820485 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:24.820501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:24.820569 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:24.854753 1039759 cri.go:89] found id: ""
	I0729 14:40:24.854786 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.854796 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:24.854802 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:24.854856 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:24.889200 1039759 cri.go:89] found id: ""
	I0729 14:40:24.889230 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.889242 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:24.889250 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:24.889312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:24.932383 1039759 cri.go:89] found id: ""
	I0729 14:40:24.932435 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.932447 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:24.932454 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:24.932515 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:24.971830 1039759 cri.go:89] found id: ""
	I0729 14:40:24.971859 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.971871 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:24.971879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:24.971936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:25.014336 1039759 cri.go:89] found id: ""
	I0729 14:40:25.014374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.014386 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:25.014397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:25.014464 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:25.048131 1039759 cri.go:89] found id: ""
	I0729 14:40:25.048161 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.048171 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:25.048177 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:25.048232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:25.089830 1039759 cri.go:89] found id: ""
	I0729 14:40:25.089866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.089878 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:25.089893 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:25.089907 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:25.172078 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:25.172113 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:25.221629 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:25.221661 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:25.291761 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:25.291806 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:25.314521 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:25.314559 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:25.402738 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
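	[editorial note] Each "failed describe nodes" block above ends the same way: kubectl cannot reach the API server at localhost:8443, which is consistent with the earlier crictl listings finding no kube-apiserver container at all. A trivial way to confirm that nothing is serving that port is sketched below; this probe is not part of minikube and is included only as an illustration.

	// Illustrative probe of the address the failing kubectl calls try to reach.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			// Matches the "connection ... refused" seen in the log: nothing is
			// serving the Kubernetes API on this port.
			fmt.Println("API server port unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}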
	I0729 14:40:27.903335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:27.918335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:27.918413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:27.951929 1039759 cri.go:89] found id: ""
	I0729 14:40:27.951955 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.951966 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:27.951972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:27.952029 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:27.986229 1039759 cri.go:89] found id: ""
	I0729 14:40:27.986266 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.986279 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:27.986287 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:27.986352 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:28.019467 1039759 cri.go:89] found id: ""
	I0729 14:40:28.019504 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.019517 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:28.019524 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:28.019590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:28.053762 1039759 cri.go:89] found id: ""
	I0729 14:40:28.053790 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.053799 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:28.053806 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:28.053858 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:28.088947 1039759 cri.go:89] found id: ""
	I0729 14:40:28.088975 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.088989 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:28.088996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:28.089070 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:28.130018 1039759 cri.go:89] found id: ""
	I0729 14:40:28.130052 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.130064 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:28.130072 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:28.130143 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:28.163682 1039759 cri.go:89] found id: ""
	I0729 14:40:28.163715 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.163725 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:28.163734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:28.163802 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:28.199698 1039759 cri.go:89] found id: ""
	I0729 14:40:28.199732 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.199744 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:28.199757 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:28.199774 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:28.253735 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:28.253776 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:28.267786 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:28.267825 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:28.337218 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:28.337250 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:28.337265 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:28.419644 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:28.419688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:27.793963 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.293775 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:28.172846 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.173544 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:27.707884 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:29.708174 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.958707 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:30.972073 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:30.972146 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:31.016629 1039759 cri.go:89] found id: ""
	I0729 14:40:31.016662 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.016673 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:31.016681 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:31.016747 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:31.058891 1039759 cri.go:89] found id: ""
	I0729 14:40:31.058921 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.058930 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:31.058936 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:31.059004 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:31.096599 1039759 cri.go:89] found id: ""
	I0729 14:40:31.096633 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.096645 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:31.096654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:31.096741 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:31.143525 1039759 cri.go:89] found id: ""
	I0729 14:40:31.143554 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.143562 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:31.143568 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:31.143628 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:31.180188 1039759 cri.go:89] found id: ""
	I0729 14:40:31.180220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.180230 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:31.180239 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:31.180310 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:31.219995 1039759 cri.go:89] found id: ""
	I0729 14:40:31.220026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.220037 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:31.220045 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:31.220108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:31.254137 1039759 cri.go:89] found id: ""
	I0729 14:40:31.254182 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.254192 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:31.254200 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:31.254272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:31.288065 1039759 cri.go:89] found id: ""
	I0729 14:40:31.288098 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.288109 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:31.288122 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:31.288137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:31.341299 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:31.341338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:31.355357 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:31.355387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:31.427414 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:31.427439 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:31.427453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:31.508372 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:31.508439 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:32.294256 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.295131 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.174315 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.674462 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.208183 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:36.707763 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.052770 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:34.066300 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:34.066366 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:34.104242 1039759 cri.go:89] found id: ""
	I0729 14:40:34.104278 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.104290 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:34.104299 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:34.104367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:34.143092 1039759 cri.go:89] found id: ""
	I0729 14:40:34.143125 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.143137 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:34.143145 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:34.143216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:34.177966 1039759 cri.go:89] found id: ""
	I0729 14:40:34.177993 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.178002 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:34.178011 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:34.178098 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:34.218325 1039759 cri.go:89] found id: ""
	I0729 14:40:34.218351 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.218361 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:34.218369 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:34.218437 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:34.256632 1039759 cri.go:89] found id: ""
	I0729 14:40:34.256665 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.256675 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:34.256683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:34.256753 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:34.290713 1039759 cri.go:89] found id: ""
	I0729 14:40:34.290739 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.290747 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:34.290753 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:34.290816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:34.331345 1039759 cri.go:89] found id: ""
	I0729 14:40:34.331378 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.331389 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:34.331397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:34.331468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:34.370184 1039759 cri.go:89] found id: ""
	I0729 14:40:34.370214 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.370226 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:34.370239 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:34.370256 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:34.448667 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:34.448709 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:34.492943 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:34.492974 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:34.548784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:34.548827 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:34.565353 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:34.565389 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:34.639411 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
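The "connection to the server localhost:8443 was refused" failure is consistent with the empty kube-apiserver probes above: nothing is serving on the apiserver port, so describe nodes cannot succeed. A quick hypothetical check of the port itself (these commands are not part of the test's own command set):

    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"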
	I0729 14:40:37.140039 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:37.153732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:37.153806 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:37.189360 1039759 cri.go:89] found id: ""
	I0729 14:40:37.189389 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.189398 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:37.189404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:37.189474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:37.225790 1039759 cri.go:89] found id: ""
	I0729 14:40:37.225820 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.225831 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:37.225839 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:37.225914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:37.261742 1039759 cri.go:89] found id: ""
	I0729 14:40:37.261772 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.261782 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:37.261791 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:37.261862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:37.295791 1039759 cri.go:89] found id: ""
	I0729 14:40:37.295826 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.295835 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:37.295843 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:37.295908 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:37.331290 1039759 cri.go:89] found id: ""
	I0729 14:40:37.331324 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.331334 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:37.331343 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:37.331413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:37.366150 1039759 cri.go:89] found id: ""
	I0729 14:40:37.366183 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.366195 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:37.366203 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:37.366273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:37.400983 1039759 cri.go:89] found id: ""
	I0729 14:40:37.401019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.401030 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:37.401038 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:37.401110 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:37.435333 1039759 cri.go:89] found id: ""
	I0729 14:40:37.435368 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.435379 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:37.435391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:37.435407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:37.488020 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:37.488057 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:37.501543 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:37.501573 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:37.576006 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.576033 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:37.576050 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:37.658600 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:37.658641 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:36.794615 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:38.795414 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:37.175174 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.674361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.207946 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:41.707724 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:40.200763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:40.216048 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:40.216121 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:40.253969 1039759 cri.go:89] found id: ""
	I0729 14:40:40.253996 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.254005 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:40.254012 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:40.254078 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:40.289557 1039759 cri.go:89] found id: ""
	I0729 14:40:40.289595 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.289608 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:40.289616 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:40.289698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:40.329756 1039759 cri.go:89] found id: ""
	I0729 14:40:40.329799 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.329823 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:40.329833 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:40.329906 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:40.365281 1039759 cri.go:89] found id: ""
	I0729 14:40:40.365315 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.365327 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:40.365335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:40.365403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:40.401300 1039759 cri.go:89] found id: ""
	I0729 14:40:40.401327 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.401336 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:40.401342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:40.401398 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:40.435679 1039759 cri.go:89] found id: ""
	I0729 14:40:40.435710 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.435719 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:40.435726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:40.435781 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:40.475825 1039759 cri.go:89] found id: ""
	I0729 14:40:40.475851 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.475859 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:40.475866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:40.475926 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:40.512153 1039759 cri.go:89] found id: ""
	I0729 14:40:40.512184 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.512191 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:40.512202 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:40.512215 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:40.563983 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:40.564022 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:40.578823 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:40.578853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:40.650282 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:40.650311 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:40.650328 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:40.734933 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:40.734980 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
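The timestamps show this whole pass repeating roughly every three seconds: the runner keeps polling pgrep -xnf kube-apiserver.*minikube.* until an apiserver process appears or the start times out. A rough stand-in for that wait loop (the 3 s interval is read off the timestamps above; the 300 s deadline is an assumption):

    deadline=$(( $(date +%s) + 300 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver"; exit 1; }
      sleep 3
    done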
	I0729 14:40:43.280095 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:43.294284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:43.294361 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:43.328862 1039759 cri.go:89] found id: ""
	I0729 14:40:43.328890 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.328899 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:43.328905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:43.328971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:43.366321 1039759 cri.go:89] found id: ""
	I0729 14:40:43.366364 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.366376 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:43.366384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:43.366459 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:43.400189 1039759 cri.go:89] found id: ""
	I0729 14:40:43.400220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.400229 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:43.400235 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:43.400299 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:43.438521 1039759 cri.go:89] found id: ""
	I0729 14:40:43.438562 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.438582 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:43.438594 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:43.438665 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:43.473931 1039759 cri.go:89] found id: ""
	I0729 14:40:43.473958 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.473966 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:43.473972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:43.474035 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:43.511460 1039759 cri.go:89] found id: ""
	I0729 14:40:43.511490 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.511497 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:43.511506 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:43.511563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:43.547255 1039759 cri.go:89] found id: ""
	I0729 14:40:43.547290 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.547301 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:43.547309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:43.547375 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:43.582384 1039759 cri.go:89] found id: ""
	I0729 14:40:43.582418 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.582428 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:43.582441 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:43.582459 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:43.595747 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:43.595780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:43.665389 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:43.665413 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:43.665427 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:43.752669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:43.752712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.797239 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:43.797272 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:41.294242 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:43.294985 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:45.794449 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:42.173495 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.173830 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.207593 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.706855 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.352841 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:46.368204 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:46.368278 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:46.406661 1039759 cri.go:89] found id: ""
	I0729 14:40:46.406687 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.406695 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:46.406701 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:46.406761 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:46.443728 1039759 cri.go:89] found id: ""
	I0729 14:40:46.443760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.443771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:46.443778 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:46.443845 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:46.477632 1039759 cri.go:89] found id: ""
	I0729 14:40:46.477666 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.477677 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:46.477686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:46.477754 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:46.512510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.512538 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.512549 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:46.512557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:46.512629 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:46.550803 1039759 cri.go:89] found id: ""
	I0729 14:40:46.550834 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.550843 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:46.550848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:46.550914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:46.591610 1039759 cri.go:89] found id: ""
	I0729 14:40:46.591640 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.591652 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:46.591661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:46.591723 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:46.631090 1039759 cri.go:89] found id: ""
	I0729 14:40:46.631122 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.631132 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:46.631139 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:46.631199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:46.670510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.670542 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.670554 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:46.670573 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:46.670590 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:46.725560 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:46.725594 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:46.739348 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:46.739372 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:46.812850 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:46.812874 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:46.812892 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:46.892922 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:46.892964 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:47.795538 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:50.293685 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.674514 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.174577 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:48.708243 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.207168 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.438741 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:49.452505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:49.452588 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:49.487294 1039759 cri.go:89] found id: ""
	I0729 14:40:49.487323 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.487331 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:49.487340 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:49.487407 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:49.521783 1039759 cri.go:89] found id: ""
	I0729 14:40:49.521816 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.521828 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:49.521836 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:49.521901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:49.557039 1039759 cri.go:89] found id: ""
	I0729 14:40:49.557075 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.557086 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:49.557094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:49.557162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:49.590431 1039759 cri.go:89] found id: ""
	I0729 14:40:49.590462 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.590474 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:49.590494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:49.590574 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:49.626230 1039759 cri.go:89] found id: ""
	I0729 14:40:49.626260 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.626268 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:49.626274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:49.626339 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:49.662030 1039759 cri.go:89] found id: ""
	I0729 14:40:49.662060 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.662068 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:49.662075 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:49.662130 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:49.699988 1039759 cri.go:89] found id: ""
	I0729 14:40:49.700019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.700035 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:49.700076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:49.700144 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:49.736830 1039759 cri.go:89] found id: ""
	I0729 14:40:49.736864 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.736873 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:49.736882 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:49.736895 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:49.775670 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:49.775703 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:49.830820 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:49.830853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:49.846374 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:49.846407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:49.917475 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:49.917502 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:49.917520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.499291 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:52.513571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:52.513641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:52.547437 1039759 cri.go:89] found id: ""
	I0729 14:40:52.547474 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.547487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:52.547495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:52.547559 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:52.587664 1039759 cri.go:89] found id: ""
	I0729 14:40:52.587705 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.587718 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:52.587726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:52.587799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:52.630642 1039759 cri.go:89] found id: ""
	I0729 14:40:52.630670 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.630678 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:52.630684 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:52.630740 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:52.665978 1039759 cri.go:89] found id: ""
	I0729 14:40:52.666010 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.666022 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:52.666030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:52.666103 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:52.701111 1039759 cri.go:89] found id: ""
	I0729 14:40:52.701140 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.701148 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:52.701155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:52.701211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:52.744219 1039759 cri.go:89] found id: ""
	I0729 14:40:52.744247 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.744257 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:52.744265 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:52.744329 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:52.781081 1039759 cri.go:89] found id: ""
	I0729 14:40:52.781113 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.781122 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:52.781128 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:52.781198 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:52.817938 1039759 cri.go:89] found id: ""
	I0729 14:40:52.817974 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.817985 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:52.817999 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:52.818016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:52.895387 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:52.895416 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:52.895433 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.976313 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:52.976356 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:53.013814 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:53.013852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:53.065901 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:53.065937 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:52.798083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.293459 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.674103 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:54.174456 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:53.208082 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.707719 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.580590 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:55.595023 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:55.595108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:55.631449 1039759 cri.go:89] found id: ""
	I0729 14:40:55.631479 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.631487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:55.631494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:55.631551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:55.666245 1039759 cri.go:89] found id: ""
	I0729 14:40:55.666274 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.666283 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:55.666296 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:55.666364 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:55.706582 1039759 cri.go:89] found id: ""
	I0729 14:40:55.706611 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.706621 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:55.706629 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:55.706696 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:55.741930 1039759 cri.go:89] found id: ""
	I0729 14:40:55.741962 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.741973 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:55.741990 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:55.742058 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:55.781440 1039759 cri.go:89] found id: ""
	I0729 14:40:55.781475 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.781486 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:55.781494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:55.781599 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:55.825329 1039759 cri.go:89] found id: ""
	I0729 14:40:55.825366 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.825377 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:55.825387 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:55.825466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:55.860834 1039759 cri.go:89] found id: ""
	I0729 14:40:55.860866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.860878 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:55.860886 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:55.860950 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:55.895460 1039759 cri.go:89] found id: ""
	I0729 14:40:55.895492 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.895502 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:55.895514 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:55.895531 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:55.951739 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:55.951781 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:55.965760 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:55.965792 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:56.044422 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:56.044458 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:56.044477 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:56.123669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:56.123714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:58.668279 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:58.682912 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:58.682974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:58.718757 1039759 cri.go:89] found id: ""
	I0729 14:40:58.718787 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.718798 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:58.718807 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:58.718861 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:58.756986 1039759 cri.go:89] found id: ""
	I0729 14:40:58.757015 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.757025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:58.757031 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:58.757092 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:58.797572 1039759 cri.go:89] found id: ""
	I0729 14:40:58.797600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.797611 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:58.797620 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:58.797689 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:58.839410 1039759 cri.go:89] found id: ""
	I0729 14:40:58.839442 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.839453 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:58.839461 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:58.839523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:57.293935 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:59.294805 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:56.673078 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.674177 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:01.173709 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:57.708051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:00.207822 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:02.208128 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.874477 1039759 cri.go:89] found id: ""
	I0729 14:40:58.874508 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.874519 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:58.874528 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:58.874602 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:58.910248 1039759 cri.go:89] found id: ""
	I0729 14:40:58.910281 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.910296 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:58.910307 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:58.910368 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:58.944845 1039759 cri.go:89] found id: ""
	I0729 14:40:58.944879 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.944890 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:58.944896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:58.944955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:58.978818 1039759 cri.go:89] found id: ""
	I0729 14:40:58.978854 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.978867 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:58.978879 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:58.978898 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:59.018961 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:59.018993 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:59.069883 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:59.069920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:59.083277 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:59.083304 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:59.159470 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:59.159494 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:59.159511 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:01.746915 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:01.759883 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:01.759949 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:01.796563 1039759 cri.go:89] found id: ""
	I0729 14:41:01.796589 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.796602 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:01.796608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:01.796691 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:01.831464 1039759 cri.go:89] found id: ""
	I0729 14:41:01.831499 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.831511 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:01.831520 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:01.831586 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:01.868633 1039759 cri.go:89] found id: ""
	I0729 14:41:01.868660 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.868668 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:01.868674 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:01.868732 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:01.903154 1039759 cri.go:89] found id: ""
	I0729 14:41:01.903183 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.903194 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:01.903202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:01.903272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:01.938256 1039759 cri.go:89] found id: ""
	I0729 14:41:01.938292 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.938304 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:01.938312 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:01.938384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:01.978117 1039759 cri.go:89] found id: ""
	I0729 14:41:01.978147 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.978159 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:01.978168 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:01.978242 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:02.014061 1039759 cri.go:89] found id: ""
	I0729 14:41:02.014089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.014100 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:02.014108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:02.014176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:02.050133 1039759 cri.go:89] found id: ""
	I0729 14:41:02.050165 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.050177 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:02.050189 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:02.050206 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:02.101188 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:02.101253 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:02.114343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:02.114369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:02.190309 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:02.190338 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:02.190354 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:02.266895 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:02.266939 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:01.794976 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.295199 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:03.176713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:05.673543 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.708032 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:07.207702 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.809474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:04.824652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:04.824725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:04.858442 1039759 cri.go:89] found id: ""
	I0729 14:41:04.858474 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.858483 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:04.858490 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:04.858542 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:04.893199 1039759 cri.go:89] found id: ""
	I0729 14:41:04.893229 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.893237 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:04.893243 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:04.893297 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:04.929480 1039759 cri.go:89] found id: ""
	I0729 14:41:04.929512 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.929524 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:04.929532 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:04.929601 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:04.965097 1039759 cri.go:89] found id: ""
	I0729 14:41:04.965127 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.965139 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:04.965147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:04.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:05.003419 1039759 cri.go:89] found id: ""
	I0729 14:41:05.003449 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.003460 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:05.003467 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:05.003557 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:05.037408 1039759 cri.go:89] found id: ""
	I0729 14:41:05.037439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.037451 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:05.037458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:05.037527 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:05.072909 1039759 cri.go:89] found id: ""
	I0729 14:41:05.072942 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.072953 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:05.072961 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:05.073034 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:05.123731 1039759 cri.go:89] found id: ""
	I0729 14:41:05.123764 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.123776 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:05.123787 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:05.123802 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:05.188687 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:05.188732 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:05.204119 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:05.204160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:05.294702 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:05.294732 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:05.294750 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:05.377412 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:05.377456 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:07.923437 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:07.937633 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:07.937711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:07.976813 1039759 cri.go:89] found id: ""
	I0729 14:41:07.976850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:07.976861 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:07.976872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:07.976946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:08.013051 1039759 cri.go:89] found id: ""
	I0729 14:41:08.013089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.013100 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:08.013109 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:08.013177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:08.047372 1039759 cri.go:89] found id: ""
	I0729 14:41:08.047404 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.047413 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:08.047420 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:08.047477 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:08.080555 1039759 cri.go:89] found id: ""
	I0729 14:41:08.080594 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.080607 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:08.080615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:08.080684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:08.117054 1039759 cri.go:89] found id: ""
	I0729 14:41:08.117087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.117098 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:08.117106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:08.117175 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:08.152270 1039759 cri.go:89] found id: ""
	I0729 14:41:08.152295 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.152303 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:08.152309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:08.152373 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:08.188804 1039759 cri.go:89] found id: ""
	I0729 14:41:08.188830 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.188842 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:08.188848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:08.188903 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:08.225101 1039759 cri.go:89] found id: ""
	I0729 14:41:08.225139 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.225151 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:08.225164 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:08.225182 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:08.278721 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:08.278759 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:08.293417 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:08.293453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:08.371802 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:08.371825 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:08.371843 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:08.452233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:08.452274 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:06.795598 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.294006 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:08.175147 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.673937 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.707777 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:12.208180 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.993379 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:11.007599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:11.007668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:11.045603 1039759 cri.go:89] found id: ""
	I0729 14:41:11.045652 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.045675 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:11.045683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:11.045746 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:11.079682 1039759 cri.go:89] found id: ""
	I0729 14:41:11.079711 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.079722 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:11.079730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:11.079797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:11.122138 1039759 cri.go:89] found id: ""
	I0729 14:41:11.122167 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.122180 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:11.122185 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:11.122249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:11.157416 1039759 cri.go:89] found id: ""
	I0729 14:41:11.157444 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.157452 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:11.157458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:11.157514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:11.198589 1039759 cri.go:89] found id: ""
	I0729 14:41:11.198631 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.198643 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:11.198652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:11.198725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:11.238329 1039759 cri.go:89] found id: ""
	I0729 14:41:11.238360 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.238369 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:11.238376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:11.238442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:11.273283 1039759 cri.go:89] found id: ""
	I0729 14:41:11.273313 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.273322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:11.273328 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:11.273382 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:11.313927 1039759 cri.go:89] found id: ""
	I0729 14:41:11.313972 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.313984 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:11.313997 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:11.314014 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:11.366507 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:11.366546 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:11.380529 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:11.380566 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:11.451839 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:11.451862 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:11.451882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:11.537109 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:11.537150 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:11.294967 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.793738 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.173482 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:15.673025 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.706708 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:16.707135 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.104794 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:14.117474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:14.117541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:14.154117 1039759 cri.go:89] found id: ""
	I0729 14:41:14.154151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.154163 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:14.154171 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:14.154236 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:14.195762 1039759 cri.go:89] found id: ""
	I0729 14:41:14.195793 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.195804 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:14.195812 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:14.195875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:14.231434 1039759 cri.go:89] found id: ""
	I0729 14:41:14.231460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.231467 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:14.231474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:14.231523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:14.264802 1039759 cri.go:89] found id: ""
	I0729 14:41:14.264839 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.264851 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:14.264859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:14.264932 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:14.300162 1039759 cri.go:89] found id: ""
	I0729 14:41:14.300184 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.300194 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:14.300202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:14.300262 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:14.335351 1039759 cri.go:89] found id: ""
	I0729 14:41:14.335385 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.335396 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:14.335404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:14.335468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:14.370064 1039759 cri.go:89] found id: ""
	I0729 14:41:14.370096 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.370107 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:14.370115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:14.370184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:14.406506 1039759 cri.go:89] found id: ""
	I0729 14:41:14.406538 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.406549 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:14.406562 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:14.406579 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:14.445641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:14.445681 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:14.496132 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:14.496165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:14.509732 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:14.509767 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:14.581519 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:14.581541 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:14.581558 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.164487 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:17.178359 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:17.178447 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:17.213780 1039759 cri.go:89] found id: ""
	I0729 14:41:17.213869 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.213887 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:17.213896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:17.213966 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:17.251006 1039759 cri.go:89] found id: ""
	I0729 14:41:17.251045 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.251056 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:17.251063 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:17.251135 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:17.306624 1039759 cri.go:89] found id: ""
	I0729 14:41:17.306654 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.306683 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:17.306691 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:17.306775 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:17.358882 1039759 cri.go:89] found id: ""
	I0729 14:41:17.358915 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.358927 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:17.358935 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:17.359008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:17.408592 1039759 cri.go:89] found id: ""
	I0729 14:41:17.408620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.408636 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:17.408642 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:17.408705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:17.445201 1039759 cri.go:89] found id: ""
	I0729 14:41:17.445228 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.445236 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:17.445242 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:17.445305 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:17.477441 1039759 cri.go:89] found id: ""
	I0729 14:41:17.477483 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.477511 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:17.477518 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:17.477591 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:17.509148 1039759 cri.go:89] found id: ""
	I0729 14:41:17.509179 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.509190 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:17.509203 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:17.509220 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:17.559784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:17.559823 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:17.574163 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:17.574199 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:17.644249 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:17.644277 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:17.644294 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.720652 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:17.720688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:16.293977 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.793489 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.793760 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:17.674099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.173742 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.707238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:21.209948 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.261591 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:20.274649 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:20.274731 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:20.311561 1039759 cri.go:89] found id: ""
	I0729 14:41:20.311591 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.311600 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:20.311606 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:20.311668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:20.350267 1039759 cri.go:89] found id: ""
	I0729 14:41:20.350300 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.350313 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:20.350322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:20.350379 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:20.384183 1039759 cri.go:89] found id: ""
	I0729 14:41:20.384213 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.384220 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:20.384227 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:20.384288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:20.422330 1039759 cri.go:89] found id: ""
	I0729 14:41:20.422358 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.422367 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:20.422373 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:20.422442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:20.465537 1039759 cri.go:89] found id: ""
	I0729 14:41:20.465568 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.465577 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:20.465586 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:20.465663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:20.507661 1039759 cri.go:89] found id: ""
	I0729 14:41:20.507691 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.507701 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:20.507710 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:20.507774 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:20.545830 1039759 cri.go:89] found id: ""
	I0729 14:41:20.545857 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.545866 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:20.545872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:20.545936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:20.586311 1039759 cri.go:89] found id: ""
	I0729 14:41:20.586345 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.586354 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:20.586364 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:20.586379 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:20.635183 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:20.635224 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:20.649660 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:20.649701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:20.729588 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:20.729613 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:20.729632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:20.811565 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:20.811605 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:23.354318 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:23.367784 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:23.367862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:23.401929 1039759 cri.go:89] found id: ""
	I0729 14:41:23.401956 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.401965 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:23.401970 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:23.402033 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:23.437130 1039759 cri.go:89] found id: ""
	I0729 14:41:23.437161 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.437185 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:23.437205 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:23.437267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:23.474029 1039759 cri.go:89] found id: ""
	I0729 14:41:23.474066 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.474078 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:23.474087 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:23.474159 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:23.506678 1039759 cri.go:89] found id: ""
	I0729 14:41:23.506714 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.506725 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:23.506732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:23.506791 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:23.541578 1039759 cri.go:89] found id: ""
	I0729 14:41:23.541618 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.541628 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:23.541636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:23.541709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:23.575852 1039759 cri.go:89] found id: ""
	I0729 14:41:23.575883 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.575891 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:23.575898 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:23.575955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:23.610611 1039759 cri.go:89] found id: ""
	I0729 14:41:23.610638 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.610646 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:23.610653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:23.610717 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:23.650403 1039759 cri.go:89] found id: ""
	I0729 14:41:23.650429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.650438 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:23.650448 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:23.650460 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:23.701856 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:23.701899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:23.716925 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:23.716958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:23.790678 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:23.790699 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:23.790717 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:23.873204 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:23.873242 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:22.794021 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:25.294289 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:22.173787 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:24.673139 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:23.708892 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.207121 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.414319 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:26.428069 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:26.428152 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:26.462538 1039759 cri.go:89] found id: ""
	I0729 14:41:26.462578 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.462590 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:26.462599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:26.462687 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:26.496461 1039759 cri.go:89] found id: ""
	I0729 14:41:26.496501 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.496513 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:26.496521 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:26.496593 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:26.534152 1039759 cri.go:89] found id: ""
	I0729 14:41:26.534190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.534203 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:26.534210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:26.534273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:26.572986 1039759 cri.go:89] found id: ""
	I0729 14:41:26.573016 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.573024 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:26.573030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:26.573097 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:26.607330 1039759 cri.go:89] found id: ""
	I0729 14:41:26.607359 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.607370 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:26.607378 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:26.607445 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:26.643023 1039759 cri.go:89] found id: ""
	I0729 14:41:26.643056 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.643067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:26.643078 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:26.643145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:26.679820 1039759 cri.go:89] found id: ""
	I0729 14:41:26.679846 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.679856 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:26.679865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:26.679930 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:26.716433 1039759 cri.go:89] found id: ""
	I0729 14:41:26.716462 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.716470 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:26.716480 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:26.716494 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:26.794508 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:26.794529 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:26.794542 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:26.876663 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:26.876701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:26.917309 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:26.917343 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:26.969397 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:26.969436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:27.294711 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.793946 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.679220 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.173259 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:31.175213 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:28.207613 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:30.707297 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.483935 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:29.497502 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:29.497585 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:29.532671 1039759 cri.go:89] found id: ""
	I0729 14:41:29.532698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.532712 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:29.532719 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:29.532784 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:29.568058 1039759 cri.go:89] found id: ""
	I0729 14:41:29.568085 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.568096 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:29.568103 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:29.568176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:29.601173 1039759 cri.go:89] found id: ""
	I0729 14:41:29.601206 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.601216 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:29.601225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:29.601284 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:29.634333 1039759 cri.go:89] found id: ""
	I0729 14:41:29.634372 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.634384 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:29.634393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:29.634460 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:29.669669 1039759 cri.go:89] found id: ""
	I0729 14:41:29.669698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.669706 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:29.669712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:29.669777 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:29.702847 1039759 cri.go:89] found id: ""
	I0729 14:41:29.702876 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.702886 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:29.702894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:29.702960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:29.740713 1039759 cri.go:89] found id: ""
	I0729 14:41:29.740743 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.740754 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:29.740762 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:29.740846 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:29.777795 1039759 cri.go:89] found id: ""
	I0729 14:41:29.777829 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.777841 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:29.777853 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:29.777869 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:29.858713 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:29.858758 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:29.896873 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:29.896914 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:29.946905 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:29.946945 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:29.960136 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:29.960170 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:30.035951 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.536130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:32.549431 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:32.549501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:32.586069 1039759 cri.go:89] found id: ""
	I0729 14:41:32.586098 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.586117 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:32.586125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:32.586183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:32.623094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.623123 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.623132 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:32.623138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:32.623205 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:32.658370 1039759 cri.go:89] found id: ""
	I0729 14:41:32.658406 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.658418 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:32.658426 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:32.658492 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:32.696436 1039759 cri.go:89] found id: ""
	I0729 14:41:32.696469 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.696478 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:32.696484 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:32.696551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:32.731306 1039759 cri.go:89] found id: ""
	I0729 14:41:32.731340 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.731352 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:32.731361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:32.731431 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:32.767049 1039759 cri.go:89] found id: ""
	I0729 14:41:32.767087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.767098 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:32.767106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:32.767179 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:32.805094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.805126 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.805138 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:32.805147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:32.805223 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:32.840088 1039759 cri.go:89] found id: ""
	I0729 14:41:32.840116 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.840125 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:32.840137 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:32.840155 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:32.854065 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:32.854095 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:32.921447 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.921477 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:32.921493 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:33.005086 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:33.005129 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:33.042555 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:33.042617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:31.795000 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:34.293349 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:33.673734 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.674275 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:32.707849 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.210238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.593173 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:35.605965 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:35.606031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:35.639315 1039759 cri.go:89] found id: ""
	I0729 14:41:35.639355 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.639367 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:35.639374 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:35.639466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:35.678657 1039759 cri.go:89] found id: ""
	I0729 14:41:35.678686 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.678695 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:35.678700 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:35.678764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:35.714108 1039759 cri.go:89] found id: ""
	I0729 14:41:35.714136 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.714147 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:35.714155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:35.714220 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:35.748793 1039759 cri.go:89] found id: ""
	I0729 14:41:35.748820 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.748831 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:35.748837 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:35.748891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:35.788853 1039759 cri.go:89] found id: ""
	I0729 14:41:35.788884 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.788895 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:35.788903 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:35.788971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:35.825032 1039759 cri.go:89] found id: ""
	I0729 14:41:35.825059 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.825067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:35.825074 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:35.825126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:35.859990 1039759 cri.go:89] found id: ""
	I0729 14:41:35.860022 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.860033 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:35.860041 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:35.860131 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:35.894318 1039759 cri.go:89] found id: ""
	I0729 14:41:35.894352 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.894364 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:35.894377 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:35.894393 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:35.907591 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:35.907617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:35.975000 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:35.975023 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:35.975040 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:36.056188 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:36.056226 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:36.094569 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:36.094606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.648685 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:38.661546 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:38.661612 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:38.698658 1039759 cri.go:89] found id: ""
	I0729 14:41:38.698692 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.698704 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:38.698711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:38.698797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:38.731239 1039759 cri.go:89] found id: ""
	I0729 14:41:38.731274 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.731282 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:38.731288 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:38.731341 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:38.766549 1039759 cri.go:89] found id: ""
	I0729 14:41:38.766583 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.766594 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:38.766602 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:38.766663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:38.803347 1039759 cri.go:89] found id: ""
	I0729 14:41:38.803374 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.803385 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:38.803393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:38.803467 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:38.840327 1039759 cri.go:89] found id: ""
	I0729 14:41:38.840363 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.840374 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:38.840384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:38.840480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:38.874181 1039759 cri.go:89] found id: ""
	I0729 14:41:38.874211 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.874219 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:38.874225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:38.874293 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:36.297301 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.794975 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.173718 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:40.675880 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:37.707171 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:39.709125 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:42.206569 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.908642 1039759 cri.go:89] found id: ""
	I0729 14:41:38.908674 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.908686 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:38.908694 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:38.908762 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:38.945081 1039759 cri.go:89] found id: ""
	I0729 14:41:38.945107 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.945116 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:38.945126 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:38.945140 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.999792 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:38.999826 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:39.013396 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:39.013421 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:39.077975 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:39.077998 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:39.078016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:39.169606 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:39.169654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.716258 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:41.730508 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:41.730579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:41.766457 1039759 cri.go:89] found id: ""
	I0729 14:41:41.766490 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.766498 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:41.766505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:41.766571 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:41.801073 1039759 cri.go:89] found id: ""
	I0729 14:41:41.801099 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.801109 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:41.801117 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:41.801178 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:41.836962 1039759 cri.go:89] found id: ""
	I0729 14:41:41.836986 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.836997 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:41.837005 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:41.837072 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:41.870169 1039759 cri.go:89] found id: ""
	I0729 14:41:41.870195 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.870205 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:41.870213 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:41.870274 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:41.902298 1039759 cri.go:89] found id: ""
	I0729 14:41:41.902323 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.902331 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:41.902337 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:41.902387 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:41.935394 1039759 cri.go:89] found id: ""
	I0729 14:41:41.935429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.935441 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:41.935449 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:41.935513 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:41.972397 1039759 cri.go:89] found id: ""
	I0729 14:41:41.972437 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.972448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:41.972456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:41.972525 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:42.006477 1039759 cri.go:89] found id: ""
	I0729 14:41:42.006503 1039759 logs.go:276] 0 containers: []
	W0729 14:41:42.006513 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:42.006526 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:42.006540 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:42.053853 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:42.053886 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:42.067143 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:42.067172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:42.135406 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:42.135432 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:42.135449 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:42.212571 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:42.212603 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.293241 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.294160 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.793697 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.173087 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.174327 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.206854 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:46.707167 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.751283 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:44.764600 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:44.764688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:44.800821 1039759 cri.go:89] found id: ""
	I0729 14:41:44.800850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.800857 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:44.800863 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:44.800924 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:44.834638 1039759 cri.go:89] found id: ""
	I0729 14:41:44.834670 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.834680 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:44.834686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:44.834744 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:44.870198 1039759 cri.go:89] found id: ""
	I0729 14:41:44.870225 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.870237 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:44.870245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:44.870312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:44.904588 1039759 cri.go:89] found id: ""
	I0729 14:41:44.904620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.904631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:44.904639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:44.904713 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:44.939442 1039759 cri.go:89] found id: ""
	I0729 14:41:44.939467 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.939474 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:44.939480 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:44.939541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:44.972771 1039759 cri.go:89] found id: ""
	I0729 14:41:44.972799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.972808 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:44.972815 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:44.972888 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:45.007513 1039759 cri.go:89] found id: ""
	I0729 14:41:45.007540 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.007549 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:45.007557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:45.007626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:45.038752 1039759 cri.go:89] found id: ""
	I0729 14:41:45.038778 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.038787 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:45.038797 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:45.038821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:45.089807 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:45.089838 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:45.103188 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:45.103221 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:45.174509 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:45.174532 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:45.174554 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:45.255288 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:45.255327 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:47.799207 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:47.814781 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:47.814866 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:47.855111 1039759 cri.go:89] found id: ""
	I0729 14:41:47.855143 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.855156 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:47.855164 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:47.855230 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:47.892542 1039759 cri.go:89] found id: ""
	I0729 14:41:47.892577 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.892589 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:47.892603 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:47.892674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:47.933408 1039759 cri.go:89] found id: ""
	I0729 14:41:47.933439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.933451 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:47.933458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:47.933531 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:47.970397 1039759 cri.go:89] found id: ""
	I0729 14:41:47.970427 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.970439 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:47.970447 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:47.970514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:48.006852 1039759 cri.go:89] found id: ""
	I0729 14:41:48.006880 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.006891 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:48.006899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:48.006967 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:48.046766 1039759 cri.go:89] found id: ""
	I0729 14:41:48.046799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.046811 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:48.046820 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:48.046893 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:48.084354 1039759 cri.go:89] found id: ""
	I0729 14:41:48.084380 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.084387 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:48.084393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:48.084468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:48.121526 1039759 cri.go:89] found id: ""
	I0729 14:41:48.121559 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.121571 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:48.121582 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:48.121606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:48.136753 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:48.136784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:48.206914 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:48.206942 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:48.206958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:48.283843 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:48.283882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:48.325845 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:48.325878 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:47.794096 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.295275 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:47.182903 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.672827 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.206572 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.206900 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.881346 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:50.894098 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:50.894177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:50.927345 1039759 cri.go:89] found id: ""
	I0729 14:41:50.927375 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.927386 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:50.927399 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:50.927466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:50.962700 1039759 cri.go:89] found id: ""
	I0729 14:41:50.962726 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.962734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:50.962740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:50.962804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:50.997299 1039759 cri.go:89] found id: ""
	I0729 14:41:50.997334 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.997346 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:50.997354 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:50.997419 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:51.030157 1039759 cri.go:89] found id: ""
	I0729 14:41:51.030190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.030202 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:51.030211 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:51.030288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:51.063123 1039759 cri.go:89] found id: ""
	I0729 14:41:51.063151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.063162 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:51.063170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:51.063237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:51.096772 1039759 cri.go:89] found id: ""
	I0729 14:41:51.096819 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.096830 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:51.096838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:51.096912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:51.131976 1039759 cri.go:89] found id: ""
	I0729 14:41:51.132004 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.132014 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:51.132022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:51.132095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:51.167560 1039759 cri.go:89] found id: ""
	I0729 14:41:51.167599 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.167610 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:51.167622 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:51.167640 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:51.229416 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:51.229455 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:51.243576 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:51.243604 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:51.311103 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:51.311123 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:51.311139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:51.396369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:51.396432 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:52.793981 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.294172 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.673945 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:54.173681 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:56.174098 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.207656 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.709310 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.942329 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:53.955960 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:53.956027 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:53.988039 1039759 cri.go:89] found id: ""
	I0729 14:41:53.988074 1039759 logs.go:276] 0 containers: []
	W0729 14:41:53.988085 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:53.988094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:53.988162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:54.020948 1039759 cri.go:89] found id: ""
	I0729 14:41:54.020981 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.020992 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:54.020999 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:54.021067 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:54.053716 1039759 cri.go:89] found id: ""
	I0729 14:41:54.053744 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.053752 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:54.053759 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:54.053811 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:54.092348 1039759 cri.go:89] found id: ""
	I0729 14:41:54.092378 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.092390 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:54.092398 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:54.092471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:54.126114 1039759 cri.go:89] found id: ""
	I0729 14:41:54.126176 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.126189 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:54.126199 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:54.126316 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:54.162125 1039759 cri.go:89] found id: ""
	I0729 14:41:54.162157 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.162167 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:54.162174 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:54.162241 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:54.202407 1039759 cri.go:89] found id: ""
	I0729 14:41:54.202439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.202448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:54.202456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:54.202522 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:54.238650 1039759 cri.go:89] found id: ""
	I0729 14:41:54.238684 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.238695 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:54.238704 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:54.238718 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:54.291200 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:54.291243 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:54.306381 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:54.306415 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:54.371355 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:54.371384 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:54.371399 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:54.455200 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:54.455237 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:56.994689 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:57.007893 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:57.007958 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:57.041775 1039759 cri.go:89] found id: ""
	I0729 14:41:57.041808 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.041820 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:57.041828 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:57.041894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:57.075409 1039759 cri.go:89] found id: ""
	I0729 14:41:57.075442 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.075454 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:57.075462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:57.075524 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:57.120963 1039759 cri.go:89] found id: ""
	I0729 14:41:57.121000 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.121011 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:57.121019 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:57.121088 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:57.164882 1039759 cri.go:89] found id: ""
	I0729 14:41:57.164912 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.164923 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:57.164932 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:57.165001 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:57.198511 1039759 cri.go:89] found id: ""
	I0729 14:41:57.198537 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.198545 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:57.198550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:57.198604 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:57.238516 1039759 cri.go:89] found id: ""
	I0729 14:41:57.238544 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.238552 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:57.238559 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:57.238622 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:57.271823 1039759 cri.go:89] found id: ""
	I0729 14:41:57.271854 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.271865 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:57.271873 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:57.271937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:57.308435 1039759 cri.go:89] found id: ""
	I0729 14:41:57.308460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.308472 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:57.308483 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:57.308506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:57.359783 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:57.359818 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:57.372669 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:57.372698 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:57.440979 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:57.441004 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:57.441018 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:57.520105 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:57.520139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:57.295421 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:59.793704 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.673850 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:01.172547 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.207493 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.208108 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:02.208334 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.060542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:00.076125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:00.076192 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:00.113095 1039759 cri.go:89] found id: ""
	I0729 14:42:00.113129 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.113137 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:00.113150 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:00.113206 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:00.154104 1039759 cri.go:89] found id: ""
	I0729 14:42:00.154132 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.154139 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:00.154146 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:00.154202 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:00.190416 1039759 cri.go:89] found id: ""
	I0729 14:42:00.190443 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.190454 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:00.190462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:00.190532 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:00.228138 1039759 cri.go:89] found id: ""
	I0729 14:42:00.228173 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.228185 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:00.228192 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:00.228261 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:00.265679 1039759 cri.go:89] found id: ""
	I0729 14:42:00.265706 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.265715 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:00.265721 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:00.265787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:00.300283 1039759 cri.go:89] found id: ""
	I0729 14:42:00.300315 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.300333 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:00.300341 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:00.300433 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:00.339224 1039759 cri.go:89] found id: ""
	I0729 14:42:00.339255 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.339264 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:00.339270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:00.339333 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:00.375780 1039759 cri.go:89] found id: ""
	I0729 14:42:00.375815 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.375826 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:00.375836 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:00.375851 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:00.425145 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:00.425190 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:00.438860 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:00.438891 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:00.512668 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:00.512695 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:00.512714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:00.597083 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:00.597139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
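	The cycle above is minikube probing the node for a kube-apiserver container and, finding none, collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying a few seconds later. A minimal sketch of that probe loop, assuming crictl is on the PATH and using an illustrative deadline and retry interval (this is not minikube's own code):

	// Sketch only: repeatedly ask the CRI runtime, via crictl, whether a
	// kube-apiserver container exists, and give up after a deadline. The
	// helper name, deadline and sleep interval are illustrative.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverContainerExists runs the same command the log shows and
	// reports whether crictl printed any container IDs.
	func apiserverContainerExists() (bool, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			ok, err := apiserverContainerExists()
			if err == nil && ok {
				fmt.Println("kube-apiserver container found")
				return
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
		}
		fmt.Println("timed out waiting for kube-apiserver container")
	}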
	I0729 14:42:03.141962 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:03.156295 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:03.156372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:03.192860 1039759 cri.go:89] found id: ""
	I0729 14:42:03.192891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.192902 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:03.192911 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:03.192982 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:03.234078 1039759 cri.go:89] found id: ""
	I0729 14:42:03.234104 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.234113 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:03.234119 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:03.234171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:03.268099 1039759 cri.go:89] found id: ""
	I0729 14:42:03.268124 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.268131 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:03.268138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:03.268197 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:03.306470 1039759 cri.go:89] found id: ""
	I0729 14:42:03.306498 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.306507 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:03.306513 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:03.306596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:03.341902 1039759 cri.go:89] found id: ""
	I0729 14:42:03.341933 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.341944 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:03.341952 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:03.342019 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:03.377235 1039759 cri.go:89] found id: ""
	I0729 14:42:03.377271 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.377282 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:03.377291 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:03.377355 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:03.411273 1039759 cri.go:89] found id: ""
	I0729 14:42:03.411308 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.411316 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:03.411322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:03.411397 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:03.446482 1039759 cri.go:89] found id: ""
	I0729 14:42:03.446511 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.446519 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:03.446530 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:03.446545 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:03.460222 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:03.460262 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:03.548149 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:03.548175 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:03.548191 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:03.640563 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:03.640608 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.681685 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:03.681713 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
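	The repeated "connection to the server localhost:8443 was refused" output above simply means nothing is answering on the apiserver port while these retries run. A minimal sketch of the same reachability check, assuming the conventional /healthz path and skipping TLS verification for the probe (both are illustrative choices, not minikube's implementation):

	// Sketch only: try an HTTPS GET against the apiserver port; when the
	// apiserver is down this fails with the same "connection refused" the
	// log records. The /healthz path and 2s timeout are illustrative.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a cluster-CA cert; skip verification
				// for this liveness-style probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver healthz status:", resp.Status)
	}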
	I0729 14:42:02.293412 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.793239 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:03.174082 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:05.674438 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.706798 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.707818 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.234967 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:06.249656 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:06.249726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:06.284768 1039759 cri.go:89] found id: ""
	I0729 14:42:06.284798 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.284810 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:06.284822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:06.284880 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:06.321109 1039759 cri.go:89] found id: ""
	I0729 14:42:06.321140 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.321150 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:06.321158 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:06.321229 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:06.357238 1039759 cri.go:89] found id: ""
	I0729 14:42:06.357269 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.357278 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:06.357284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:06.357342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:06.391613 1039759 cri.go:89] found id: ""
	I0729 14:42:06.391643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.391653 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:06.391661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:06.391726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:06.428782 1039759 cri.go:89] found id: ""
	I0729 14:42:06.428813 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.428823 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:06.428831 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:06.428890 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:06.463558 1039759 cri.go:89] found id: ""
	I0729 14:42:06.463596 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.463607 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:06.463615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:06.463683 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:06.500442 1039759 cri.go:89] found id: ""
	I0729 14:42:06.500474 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.500484 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:06.500501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:06.500579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:06.535589 1039759 cri.go:89] found id: ""
	I0729 14:42:06.535627 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.535638 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:06.535650 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:06.535668 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:06.584641 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:06.584676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:06.597702 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:06.597737 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:06.664499 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:06.664537 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:06.664555 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:06.744808 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:06.744845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:06.793853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.294853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.172993 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:10.174863 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.707874 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:11.209387 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.286151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:09.307822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:09.307892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:09.369334 1039759 cri.go:89] found id: ""
	I0729 14:42:09.369363 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.369373 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:09.369381 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:09.369458 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:09.402302 1039759 cri.go:89] found id: ""
	I0729 14:42:09.402334 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.402345 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:09.402353 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:09.402423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:09.436351 1039759 cri.go:89] found id: ""
	I0729 14:42:09.436380 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.436402 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:09.436429 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:09.436501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:09.467735 1039759 cri.go:89] found id: ""
	I0729 14:42:09.467768 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.467780 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:09.467788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:09.467849 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:09.503328 1039759 cri.go:89] found id: ""
	I0729 14:42:09.503355 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.503367 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:09.503376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:09.503438 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:09.540012 1039759 cri.go:89] found id: ""
	I0729 14:42:09.540039 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.540047 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:09.540053 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:09.540106 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:09.576737 1039759 cri.go:89] found id: ""
	I0729 14:42:09.576801 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.576814 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:09.576822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:09.576920 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:09.614624 1039759 cri.go:89] found id: ""
	I0729 14:42:09.614651 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.614659 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:09.614669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:09.614684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:09.650533 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:09.650580 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:09.709144 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:09.709175 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:09.724147 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:09.724173 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:09.790737 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:09.790760 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:09.790775 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.376968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:12.390344 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:12.390409 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:12.424820 1039759 cri.go:89] found id: ""
	I0729 14:42:12.424849 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.424860 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:12.424876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:12.424943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:12.457444 1039759 cri.go:89] found id: ""
	I0729 14:42:12.457480 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.457492 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:12.457500 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:12.457561 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:12.490027 1039759 cri.go:89] found id: ""
	I0729 14:42:12.490058 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.490069 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:12.490077 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:12.490145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:12.523229 1039759 cri.go:89] found id: ""
	I0729 14:42:12.523256 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.523265 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:12.523270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:12.523321 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:12.557849 1039759 cri.go:89] found id: ""
	I0729 14:42:12.557875 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.557885 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:12.557891 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:12.557951 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:12.592943 1039759 cri.go:89] found id: ""
	I0729 14:42:12.592973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.592982 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:12.592989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:12.593059 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:12.626495 1039759 cri.go:89] found id: ""
	I0729 14:42:12.626531 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.626539 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:12.626557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:12.626641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:12.663764 1039759 cri.go:89] found id: ""
	I0729 14:42:12.663793 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.663805 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:12.663818 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:12.663835 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:12.722521 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:12.722556 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:12.736476 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:12.736505 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:12.809582 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:12.809617 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:12.809637 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.890665 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:12.890712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:11.793144 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.793447 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.794630 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:12.673257 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.173702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.707929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.707964 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.429702 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:15.443258 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:15.443340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:15.477170 1039759 cri.go:89] found id: ""
	I0729 14:42:15.477198 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.477207 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:15.477212 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:15.477266 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:15.511614 1039759 cri.go:89] found id: ""
	I0729 14:42:15.511652 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.511665 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:15.511671 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:15.511739 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:15.548472 1039759 cri.go:89] found id: ""
	I0729 14:42:15.548501 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.548511 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:15.548519 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:15.548590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:15.589060 1039759 cri.go:89] found id: ""
	I0729 14:42:15.589090 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.589102 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:15.589110 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:15.589185 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:15.622846 1039759 cri.go:89] found id: ""
	I0729 14:42:15.622873 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.622882 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:15.622887 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:15.622943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:15.656193 1039759 cri.go:89] found id: ""
	I0729 14:42:15.656220 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.656229 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:15.656237 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:15.656307 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:15.691301 1039759 cri.go:89] found id: ""
	I0729 14:42:15.691336 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.691348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:15.691357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:15.691420 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:15.729923 1039759 cri.go:89] found id: ""
	I0729 14:42:15.729963 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.729974 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:15.729988 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:15.730004 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:15.783531 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:15.783569 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:15.799590 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:15.799619 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:15.874849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:15.874886 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:15.874901 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:15.957384 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:15.957424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.497035 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:18.511538 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:18.511616 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:18.550512 1039759 cri.go:89] found id: ""
	I0729 14:42:18.550552 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.550573 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:18.550582 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:18.550642 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:18.585910 1039759 cri.go:89] found id: ""
	I0729 14:42:18.585942 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.585954 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:18.585962 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:18.586031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:18.619680 1039759 cri.go:89] found id: ""
	I0729 14:42:18.619712 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.619722 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:18.619730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:18.619799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:18.651559 1039759 cri.go:89] found id: ""
	I0729 14:42:18.651592 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.651604 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:18.651613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:18.651688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:18.686668 1039759 cri.go:89] found id: ""
	I0729 14:42:18.686693 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.686701 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:18.686711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:18.686764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:18.722832 1039759 cri.go:89] found id: ""
	I0729 14:42:18.722859 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.722869 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:18.722876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:18.722927 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:18.758261 1039759 cri.go:89] found id: ""
	I0729 14:42:18.758289 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.758302 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:18.758310 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:18.758378 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:18.795190 1039759 cri.go:89] found id: ""
	I0729 14:42:18.795216 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.795227 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:18.795237 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:18.795251 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.835331 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:18.835366 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:17.796916 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.294082 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:17.673000 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:19.674010 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.209178 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.707421 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.889707 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:18.889745 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:18.902477 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:18.902503 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:18.970712 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:18.970735 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:18.970748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:21.552092 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:21.566581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.566669 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.600230 1039759 cri.go:89] found id: ""
	I0729 14:42:21.600261 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.600275 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:21.600283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.600346 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.636576 1039759 cri.go:89] found id: ""
	I0729 14:42:21.636616 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.636627 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:21.636635 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.636705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.672944 1039759 cri.go:89] found id: ""
	I0729 14:42:21.672973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.672984 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:21.672997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.673063 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.708555 1039759 cri.go:89] found id: ""
	I0729 14:42:21.708582 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.708601 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:21.708613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:21.708673 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:21.744862 1039759 cri.go:89] found id: ""
	I0729 14:42:21.744891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.744902 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:21.744908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:21.744973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:21.779084 1039759 cri.go:89] found id: ""
	I0729 14:42:21.779111 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.779119 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:21.779126 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:21.779183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:21.819931 1039759 cri.go:89] found id: ""
	I0729 14:42:21.819972 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.819981 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:21.819989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:21.820047 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:21.855472 1039759 cri.go:89] found id: ""
	I0729 14:42:21.855500 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.855509 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:21.855522 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:21.855539 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:21.925561 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:21.925579 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:21.925596 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.015986 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:22.016032 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:22.059898 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:22.059935 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:22.129018 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.129055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:21.787886 1039263 pod_ready.go:81] duration metric: took 4m0.000465481s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:21.787929 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 14:42:21.787945 1039263 pod_ready.go:38] duration metric: took 4m5.237036546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
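	The three lines above record the 4-minute readiness wait for the metrics-server pod expiring with "context deadline exceeded". A minimal client-go sketch of such a readiness poll, assuming a k8s-app=metrics-server label selector and reusing the kubeconfig path shown in the log; the helper is illustrative, not minikube's pod_ready.go:

	// Sketch only: poll pods matching a label selector until one reports the
	// PodReady condition or the context deadline expires. The label selector,
	// poll interval and timeout are illustrative.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
			if err == nil {
				for i := range pods.Items {
					if podReady(&pods.Items[i]) {
						fmt.Println("metrics-server is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("context deadline exceeded waiting for metrics-server")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}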
	I0729 14:42:21.787973 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:42:21.788025 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.788089 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.857594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:21.857613 1039263 cri.go:89] found id: ""
	I0729 14:42:21.857620 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:21.857674 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.862462 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.862523 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.903562 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:21.903594 1039263 cri.go:89] found id: ""
	I0729 14:42:21.903604 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:21.903660 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.908232 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.908327 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.947632 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:21.947663 1039263 cri.go:89] found id: ""
	I0729 14:42:21.947674 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:21.947737 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.952576 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.952649 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.995318 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:21.995343 1039263 cri.go:89] found id: ""
	I0729 14:42:21.995351 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:21.995418 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.000352 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:22.000440 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:22.040544 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.040572 1039263 cri.go:89] found id: ""
	I0729 14:42:22.040582 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:22.040648 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.044840 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:22.044910 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:22.090787 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:22.090816 1039263 cri.go:89] found id: ""
	I0729 14:42:22.090827 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:22.090897 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.096748 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:22.096826 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:22.143491 1039263 cri.go:89] found id: ""
	I0729 14:42:22.143522 1039263 logs.go:276] 0 containers: []
	W0729 14:42:22.143534 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:22.143541 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:22.143609 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:22.179378 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:22.179404 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:22.179409 1039263 cri.go:89] found id: ""
	I0729 14:42:22.179419 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:22.179482 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.184686 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.189009 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:22.189029 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:22.250475 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:22.250510 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.286581 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:22.286622 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.325541 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:22.325570 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.831822 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.831875 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:22.846540 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:22.846588 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:22.970758 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:22.970796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:23.013428 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:23.013467 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:23.064784 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:23.064820 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:23.111615 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:23.111653 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:23.151296 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:23.151328 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:23.198650 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:23.198692 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:23.259196 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:23.259247 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.808980 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:25.829180 1039263 api_server.go:72] duration metric: took 4m16.997740137s to wait for apiserver process to appear ...
	I0729 14:42:25.829211 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:42:25.829260 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:25.829335 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:25.875138 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.875167 1039263 cri.go:89] found id: ""
	I0729 14:42:25.875175 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:25.875230 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.879855 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:25.879937 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:25.916938 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:25.916964 1039263 cri.go:89] found id: ""
	I0729 14:42:25.916974 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:25.917036 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.921166 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:25.921224 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:25.958196 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:25.958224 1039263 cri.go:89] found id: ""
	I0729 14:42:25.958234 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:25.958300 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.962697 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:25.962760 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:26.000162 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:26.000195 1039263 cri.go:89] found id: ""
	I0729 14:42:26.000206 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:26.000277 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.004518 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:26.004594 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:26.041099 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:26.041133 1039263 cri.go:89] found id: ""
	I0729 14:42:26.041144 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:26.041208 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.045334 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:26.045412 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:26.082783 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:26.082815 1039263 cri.go:89] found id: ""
	I0729 14:42:26.082826 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:26.082901 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.086996 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:26.087063 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:26.123636 1039263 cri.go:89] found id: ""
	I0729 14:42:26.123677 1039263 logs.go:276] 0 containers: []
	W0729 14:42:26.123688 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:26.123694 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:26.123756 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:26.163819 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.163849 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.163855 1039263 cri.go:89] found id: ""
	I0729 14:42:26.163864 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:26.163929 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.168611 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.173125 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:26.173155 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.173593 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:22.708101 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:25.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:27.207926 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.645474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:24.658107 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:24.658171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:24.696604 1039759 cri.go:89] found id: ""
	I0729 14:42:24.696635 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.696645 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:24.696653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:24.696725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:24.733862 1039759 cri.go:89] found id: ""
	I0729 14:42:24.733887 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.733894 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:24.733901 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:24.733957 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:24.770614 1039759 cri.go:89] found id: ""
	I0729 14:42:24.770644 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.770656 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:24.770664 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:24.770734 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:24.806368 1039759 cri.go:89] found id: ""
	I0729 14:42:24.806394 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.806403 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:24.806408 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:24.806470 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:24.838490 1039759 cri.go:89] found id: ""
	I0729 14:42:24.838526 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.838534 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:24.838541 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:24.838596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:24.871017 1039759 cri.go:89] found id: ""
	I0729 14:42:24.871043 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.871051 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:24.871057 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:24.871128 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:24.903281 1039759 cri.go:89] found id: ""
	I0729 14:42:24.903311 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.903322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:24.903330 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:24.903403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:24.937245 1039759 cri.go:89] found id: ""
	I0729 14:42:24.937279 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.937291 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:24.937304 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:24.937319 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:24.989518 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:24.989551 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:25.005021 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:25.005055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:25.080849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:25.080877 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:25.080893 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:25.163742 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:25.163784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:27.706182 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:27.719350 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:27.719425 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:27.756955 1039759 cri.go:89] found id: ""
	I0729 14:42:27.756982 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.756990 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:27.756997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:27.757054 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:27.791975 1039759 cri.go:89] found id: ""
	I0729 14:42:27.792014 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.792025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:27.792033 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:27.792095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:27.834188 1039759 cri.go:89] found id: ""
	I0729 14:42:27.834215 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.834223 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:27.834230 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:27.834296 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:27.867798 1039759 cri.go:89] found id: ""
	I0729 14:42:27.867834 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.867843 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:27.867851 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:27.867918 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:27.900316 1039759 cri.go:89] found id: ""
	I0729 14:42:27.900343 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.900351 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:27.900357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:27.900422 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:27.932361 1039759 cri.go:89] found id: ""
	I0729 14:42:27.932391 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.932402 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:27.932425 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:27.932493 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:27.965530 1039759 cri.go:89] found id: ""
	I0729 14:42:27.965562 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.965573 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:27.965581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:27.965651 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:27.999582 1039759 cri.go:89] found id: ""
	I0729 14:42:27.999608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.999617 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:27.999626 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:27.999654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:28.069415 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:28.069438 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:28.069454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:28.149781 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:28.149821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:28.190045 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:28.190072 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:28.244147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:28.244188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.217755 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:26.217796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.257363 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:26.257399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.297502 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:26.297534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:26.729336 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:26.729370 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:26.779172 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:26.779213 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.794369 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:26.794399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:26.857964 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:26.858000 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.895052 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:26.895083 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:26.936360 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:26.936395 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:27.037118 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:27.037160 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:27.089764 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:27.089798 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:27.134009 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:27.134042 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.690960 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:42:29.696457 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:42:29.697313 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:42:29.697335 1039263 api_server.go:131] duration metric: took 3.868117139s to wait for apiserver health ...
	I0729 14:42:29.697343 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:42:29.697370 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:29.697430 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:29.740594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:29.740623 1039263 cri.go:89] found id: ""
	I0729 14:42:29.740633 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:29.740696 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.745183 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:29.745257 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:29.780091 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:29.780112 1039263 cri.go:89] found id: ""
	I0729 14:42:29.780119 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:29.780178 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.784241 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:29.784305 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:29.825618 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:29.825641 1039263 cri.go:89] found id: ""
	I0729 14:42:29.825649 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:29.825715 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.830291 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:29.830351 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:29.866651 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:29.866685 1039263 cri.go:89] found id: ""
	I0729 14:42:29.866695 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:29.866758 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.871440 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:29.871494 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:29.911944 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:29.911968 1039263 cri.go:89] found id: ""
	I0729 14:42:29.911976 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:29.912037 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.916604 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:29.916680 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:29.954334 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.954361 1039263 cri.go:89] found id: ""
	I0729 14:42:29.954371 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:29.954446 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.959051 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:29.959130 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:29.996760 1039263 cri.go:89] found id: ""
	I0729 14:42:29.996795 1039263 logs.go:276] 0 containers: []
	W0729 14:42:29.996804 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:29.996812 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:29.996883 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:30.034562 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.034598 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.034604 1039263 cri.go:89] found id: ""
	I0729 14:42:30.034614 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:30.034682 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.039588 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.043866 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:30.043889 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:30.091309 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:30.091349 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:30.149888 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:30.149926 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:30.189441 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:30.189479 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:30.250850 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:30.250890 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.290077 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:30.290111 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.329035 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:30.329068 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:30.383068 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:30.383113 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:30.497009 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:30.497045 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:30.914489 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:30.914534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:30.972901 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:30.972951 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:31.021798 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.021838 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:31.040147 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:31.040182 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.674294 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.173375 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:31.173588 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.710051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:32.209382 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.593681 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:42:33.593711 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.593716 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.593719 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.593723 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.593725 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.593728 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.593733 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.593736 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.593744 1039263 system_pods.go:74] duration metric: took 3.896394577s to wait for pod list to return data ...
	I0729 14:42:33.593751 1039263 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:42:33.596176 1039263 default_sa.go:45] found service account: "default"
	I0729 14:42:33.596197 1039263 default_sa.go:55] duration metric: took 2.440561ms for default service account to be created ...
	I0729 14:42:33.596205 1039263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:42:33.601830 1039263 system_pods.go:86] 8 kube-system pods found
	I0729 14:42:33.601855 1039263 system_pods.go:89] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.601861 1039263 system_pods.go:89] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.601866 1039263 system_pods.go:89] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.601871 1039263 system_pods.go:89] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.601878 1039263 system_pods.go:89] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.601887 1039263 system_pods.go:89] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.601897 1039263 system_pods.go:89] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.601908 1039263 system_pods.go:89] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.601921 1039263 system_pods.go:126] duration metric: took 5.70985ms to wait for k8s-apps to be running ...
	I0729 14:42:33.601934 1039263 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:42:33.601994 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:33.620869 1039263 system_svc.go:56] duration metric: took 18.921974ms WaitForService to wait for kubelet
	I0729 14:42:33.620907 1039263 kubeadm.go:582] duration metric: took 4m24.7894747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:42:33.620939 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:42:33.623517 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:42:33.623538 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:42:33.623562 1039263 node_conditions.go:105] duration metric: took 2.617272ms to run NodePressure ...
	I0729 14:42:33.623582 1039263 start.go:241] waiting for startup goroutines ...
	I0729 14:42:33.623591 1039263 start.go:246] waiting for cluster config update ...
	I0729 14:42:33.623601 1039263 start.go:255] writing updated cluster config ...
	I0729 14:42:33.623897 1039263 ssh_runner.go:195] Run: rm -f paused
	I0729 14:42:33.677961 1039263 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:42:33.679952 1039263 out.go:177] * Done! kubectl is now configured to use "embed-certs-668123" cluster and "default" namespace by default
	I0729 14:42:30.758335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:30.771788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:30.771860 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:30.807608 1039759 cri.go:89] found id: ""
	I0729 14:42:30.807633 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.807641 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:30.807647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:30.807709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:30.842361 1039759 cri.go:89] found id: ""
	I0729 14:42:30.842389 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.842397 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:30.842404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:30.842474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:30.879123 1039759 cri.go:89] found id: ""
	I0729 14:42:30.879149 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.879157 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:30.879162 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:30.879228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:30.913042 1039759 cri.go:89] found id: ""
	I0729 14:42:30.913072 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.913084 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:30.913092 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:30.913162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:30.949867 1039759 cri.go:89] found id: ""
	I0729 14:42:30.949900 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.949910 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:30.949919 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:30.949988 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:30.997468 1039759 cri.go:89] found id: ""
	I0729 14:42:30.997497 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.997509 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:30.997516 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:30.997606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:31.039611 1039759 cri.go:89] found id: ""
	I0729 14:42:31.039643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.039654 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:31.039662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:31.039730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:31.085802 1039759 cri.go:89] found id: ""
	I0729 14:42:31.085839 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.085851 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:31.085862 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:31.085890 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:31.155919 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:31.155941 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:31.155954 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:31.232795 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:31.232833 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:31.270647 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:31.270682 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:31.324648 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.324685 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:33.839801 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:33.853358 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:33.853417 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:33.674345 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:36.174468 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:34.707752 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:37.209918 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.889294 1039759 cri.go:89] found id: ""
	I0729 14:42:33.889323 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.889334 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:33.889342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:33.889413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:33.930106 1039759 cri.go:89] found id: ""
	I0729 14:42:33.930130 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.930142 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:33.930149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:33.930211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:33.973607 1039759 cri.go:89] found id: ""
	I0729 14:42:33.973634 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.973646 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:33.973654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:33.973715 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:34.010103 1039759 cri.go:89] found id: ""
	I0729 14:42:34.010133 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.010142 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:34.010149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:34.010209 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:34.044050 1039759 cri.go:89] found id: ""
	I0729 14:42:34.044080 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.044092 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:34.044099 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:34.044174 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:34.081222 1039759 cri.go:89] found id: ""
	I0729 14:42:34.081250 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.081260 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:34.081268 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:34.081360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:34.115837 1039759 cri.go:89] found id: ""
	I0729 14:42:34.115878 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.115891 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:34.115899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:34.115973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:34.151086 1039759 cri.go:89] found id: ""
	I0729 14:42:34.151116 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.151126 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:34.151139 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:34.151156 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:34.164058 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:34.164087 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:34.238481 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:34.238503 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:34.238518 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:34.316236 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:34.316279 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:34.356281 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:34.356316 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:36.910374 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:36.924907 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:36.925008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:36.960508 1039759 cri.go:89] found id: ""
	I0729 14:42:36.960535 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.960543 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:36.960550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:36.960631 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:36.999840 1039759 cri.go:89] found id: ""
	I0729 14:42:36.999869 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.999881 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:36.999889 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:36.999960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:37.032801 1039759 cri.go:89] found id: ""
	I0729 14:42:37.032832 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.032840 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:37.032847 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:37.032907 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:37.066359 1039759 cri.go:89] found id: ""
	I0729 14:42:37.066386 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.066394 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:37.066401 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:37.066454 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:37.103816 1039759 cri.go:89] found id: ""
	I0729 14:42:37.103844 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.103852 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:37.103859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:37.103922 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:37.137135 1039759 cri.go:89] found id: ""
	I0729 14:42:37.137175 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.137186 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:37.137194 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:37.137267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:37.170819 1039759 cri.go:89] found id: ""
	I0729 14:42:37.170851 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.170863 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:37.170871 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:37.170941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:37.206427 1039759 cri.go:89] found id: ""
	I0729 14:42:37.206456 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.206467 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:37.206478 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:37.206492 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:37.287119 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:37.287160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:37.331090 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:37.331119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:37.392147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:37.392189 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:37.406017 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:37.406047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:37.471644 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:38.673603 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:40.674214 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:39.706915 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:41.201453 1039440 pod_ready.go:81] duration metric: took 4m0.000454399s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:41.201488 1039440 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:42:41.201514 1039440 pod_ready.go:38] duration metric: took 4m13.052610312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:41.201553 1039440 kubeadm.go:597] duration metric: took 4m22.712976139s to restartPrimaryControlPlane
	W0729 14:42:41.201639 1039440 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:41.201696 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:39.972835 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:39.985878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:39.985945 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:40.020312 1039759 cri.go:89] found id: ""
	I0729 14:42:40.020349 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.020360 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:40.020368 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:40.020456 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:40.055688 1039759 cri.go:89] found id: ""
	I0729 14:42:40.055721 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.055732 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:40.055740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:40.055799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:40.090432 1039759 cri.go:89] found id: ""
	I0729 14:42:40.090463 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.090472 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:40.090478 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:40.090549 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:40.127794 1039759 cri.go:89] found id: ""
	I0729 14:42:40.127823 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.127832 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:40.127838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:40.127894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:40.162911 1039759 cri.go:89] found id: ""
	I0729 14:42:40.162944 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.162953 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:40.162959 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:40.163020 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:40.201578 1039759 cri.go:89] found id: ""
	I0729 14:42:40.201608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.201619 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:40.201625 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:40.201684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:40.247314 1039759 cri.go:89] found id: ""
	I0729 14:42:40.247340 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.247348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:40.247363 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:40.247436 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:40.285393 1039759 cri.go:89] found id: ""
	I0729 14:42:40.285422 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.285431 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:40.285440 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:40.285458 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:40.299901 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:40.299933 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:40.372774 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:40.372802 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:40.372821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:40.454392 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:40.454447 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:40.494641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:40.494671 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:43.046060 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:43.058790 1039759 kubeadm.go:597] duration metric: took 4m3.37086398s to restartPrimaryControlPlane
	W0729 14:42:43.058888 1039759 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:43.058920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:43.544647 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:43.560304 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:42:43.570229 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:42:43.579922 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:42:43.579946 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:42:43.580004 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:42:43.589520 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:42:43.589591 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:42:43.600286 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:42:43.611565 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:42:43.611629 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:42:43.623432 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.633289 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:42:43.633338 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.643410 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:42:43.653723 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:42:43.653816 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:42:43.663840 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:42:43.735243 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:42:43.735314 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:42:43.904148 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:42:43.904310 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:42:43.904480 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:42:44.101401 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:42:44.103392 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:42:44.103499 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:42:44.103580 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:42:44.103693 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:42:44.103829 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:42:44.103944 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:42:44.104054 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:42:44.104146 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:42:44.104360 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:42:44.104599 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:42:44.105264 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:42:44.105363 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:42:44.105461 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:42:44.426107 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:42:44.593004 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:42:44.845387 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:42:44.934634 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:42:44.959808 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:42:44.961918 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:42:44.961990 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:42:45.117986 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:42:42.678218 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.175453 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.119775 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:42:45.119913 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:42:45.121333 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:42:45.123001 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:42:45.123783 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:42:45.126031 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:42:47.673678 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:49.674212 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:52.173086 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:54.173797 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:56.178948 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:58.674432 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:00.675207 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:03.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:05.175460 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:07.674421 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:09.674478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:12.882329 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.680602745s)
	I0729 14:43:12.882426 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:12.900267 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:12.910750 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:12.921172 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:12.921194 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:12.921244 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:43:12.931186 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:12.931243 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:12.940800 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:43:12.949875 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:12.949929 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:12.959555 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.968817 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:12.968871 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.978560 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:43:12.987657 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:12.987700 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
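The grep/rm sequence above is minikube's stale-config cleanup: any leftover kubeconfig that does not reference the expected control-plane endpoint is removed before `kubeadm init` re-runs. A minimal sketch of that rule in Go (the helper name and local file access are illustrative, not minikube's actual implementation, which runs the commands over SSH):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path when it exists but does not mention endpoint,
// mirroring the "grep ... || rm -f ..." sequence in the log above.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := removeIfStale("/etc/kubernetes/"+f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```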
	I0729 14:43:12.997142 1039440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:13.057245 1039440 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 14:43:13.057405 1039440 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:13.205227 1039440 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:13.205381 1039440 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:13.205541 1039440 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 14:43:13.404885 1039440 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:13.407054 1039440 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:13.407148 1039440 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:13.407232 1039440 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:13.407329 1039440 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:13.407411 1039440 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:13.407509 1039440 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:13.407598 1039440 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:13.407688 1039440 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:13.407774 1039440 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:13.407889 1039440 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:13.408006 1039440 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:13.408071 1039440 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:13.408177 1039440 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:13.563569 1039440 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:14.001138 1039440 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:14.091368 1039440 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:14.238732 1039440 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:14.344460 1039440 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:14.346386 1039440 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:14.349309 1039440 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:12.174022 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.673166 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.351183 1039440 out.go:204]   - Booting up control plane ...
	I0729 14:43:14.351293 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:14.351374 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:14.351671 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:14.375878 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:14.377114 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:14.377198 1039440 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:14.528561 1039440 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:14.528665 1039440 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:15.030447 1039440 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044001ms
	I0729 14:43:15.030591 1039440 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:43:20.033357 1039440 kubeadm.go:310] [api-check] The API server is healthy after 5.002708747s
	I0729 14:43:20.055871 1039440 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:43:20.069020 1039440 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:43:20.108465 1039440 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:43:20.108664 1039440 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-751306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:43:20.124596 1039440 kubeadm.go:310] [bootstrap-token] Using token: vqqt7g.hayxn6bly3sjo08s
	I0729 14:43:20.125995 1039440 out.go:204]   - Configuring RBAC rules ...
	I0729 14:43:20.126124 1039440 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:43:20.138826 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:43:20.145976 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:43:20.149166 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:43:20.152875 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:43:20.156268 1039440 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:43:20.446117 1039440 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:43:20.900251 1039440 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:43:21.446105 1039440 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:43:21.446920 1039440 kubeadm.go:310] 
	I0729 14:43:21.446984 1039440 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:43:21.446992 1039440 kubeadm.go:310] 
	I0729 14:43:21.447057 1039440 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:43:21.447063 1039440 kubeadm.go:310] 
	I0729 14:43:21.447084 1039440 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:43:21.447133 1039440 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:43:21.447176 1039440 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:43:21.447182 1039440 kubeadm.go:310] 
	I0729 14:43:21.447233 1039440 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:43:21.447242 1039440 kubeadm.go:310] 
	I0729 14:43:21.447310 1039440 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:43:21.447334 1039440 kubeadm.go:310] 
	I0729 14:43:21.447408 1039440 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:43:21.447515 1039440 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:43:21.447574 1039440 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:43:21.447582 1039440 kubeadm.go:310] 
	I0729 14:43:21.447652 1039440 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:43:21.447722 1039440 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:43:21.447728 1039440 kubeadm.go:310] 
	I0729 14:43:21.447799 1039440 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.447903 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:43:21.447931 1039440 kubeadm.go:310] 	--control-plane 
	I0729 14:43:21.447935 1039440 kubeadm.go:310] 
	I0729 14:43:21.448017 1039440 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:43:21.448025 1039440 kubeadm.go:310] 
	I0729 14:43:21.448115 1039440 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.448239 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:43:21.449071 1039440 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
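For reference, the `--discovery-token-ca-cert-hash` printed in the join command above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A short sketch of recomputing it in Go (the ca.crt path is an assumption based on the certificateDir logged earlier):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum[:])
}
```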
	I0729 14:43:21.449117 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:43:21.449134 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:43:21.450744 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:43:16.674887 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:19.175478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:21.452012 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:43:21.464232 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
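The 496-byte conflist written above is not reproduced in the log; the sketch below only illustrates the general shape of a bridge CNI config with a portmap chain (all field values are assumptions, not the actual file contents):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge + portmap plugin chain; values are placeholders,
	// not minikube's real template.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out)) // would land in /etc/cni/net.d/1-k8s.conflist
}
```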
	I0729 14:43:21.486786 1039440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:43:21.486890 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.486887 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-751306 minikube.k8s.io/updated_at=2024_07_29T14_43_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=default-k8s-diff-port-751306 minikube.k8s.io/primary=true
	I0729 14:43:21.689413 1039440 ops.go:34] apiserver oom_adj: -16
	I0729 14:43:21.697342 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:22.198351 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.673361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:23.674189 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:26.173782 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:22.698043 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.198259 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.697640 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.198325 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.697702 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.198216 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.697625 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.197978 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.698039 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:27.197794 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.126835 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:43:25.127033 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:25.127306 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:28.174036 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:29.667306 1038758 pod_ready.go:81] duration metric: took 4m0.000473541s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	E0729 14:43:29.667341 1038758 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:43:29.667369 1038758 pod_ready.go:38] duration metric: took 4m13.916299366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:29.667407 1038758 kubeadm.go:597] duration metric: took 4m21.57875039s to restartPrimaryControlPlane
	W0729 14:43:29.667481 1038758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:43:29.667513 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:43:27.698036 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.197941 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.697839 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.197525 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.698141 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.197670 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.697615 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.197999 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.697648 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:32.197647 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.127504 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:30.127777 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:32.697837 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.197692 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.697431 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.198048 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.698439 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.802320 1039440 kubeadm.go:1113] duration metric: took 13.31552277s to wait for elevateKubeSystemPrivileges
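The repeated `kubectl get sa default` runs above are a poll loop: minikube retries roughly every 500ms until the default service account exists (the elevateKubeSystemPrivileges wait summarized here). A stripped-down sketch of the same pattern, run locally for illustration rather than over SSH as in the log (timeout is an assumption):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Mirrors: sudo <kubectl> get sa default --kubeconfig=/var/lib/minikube/kubeconfig
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```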
	I0729 14:43:34.802367 1039440 kubeadm.go:394] duration metric: took 5m16.369033556s to StartCluster
	I0729 14:43:34.802391 1039440 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.802488 1039440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:43:34.804740 1039440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.805049 1039440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:43:34.805148 1039440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:43:34.805251 1039440 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805262 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:43:34.805269 1039440 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805313 1039440 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805294 1039440 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805341 1039440 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:43:34.805358 1039440 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805369 1039440 addons.go:243] addon metrics-server should already be in state true
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805325 1039440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751306"
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805838 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805869 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805904 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805928 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805968 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.806026 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.806625 1039440 out.go:177] * Verifying Kubernetes components...
	I0729 14:43:34.807999 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:43:34.823091 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0729 14:43:34.823103 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0729 14:43:34.823532 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.823556 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.824084 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824111 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824372 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824399 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824427 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.824891 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.825049 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0729 14:43:34.825140 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.825191 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.825210 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.825415 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.825927 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.825945 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.826314 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.826903 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.826939 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.829361 1039440 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.829386 1039440 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:43:34.829417 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.829785 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.829832 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.841752 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I0729 14:43:34.842232 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.842938 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.842965 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.843370 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0729 14:43:34.843397 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.843713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.843818 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.844223 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.844247 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.844615 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.844805 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.846424 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.846619 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.848531 1039440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:43:34.848918 1039440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:43:34.849006 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0729 14:43:34.849421 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.849852 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:43:34.849870 1039440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:43:34.849888 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850037 1039440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:34.850053 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:43:34.850069 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850233 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.850251 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.850659 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.851665 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.851781 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.853937 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854441 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854518 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.854540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854589 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.854779 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855035 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.855098 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.855114 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.855169 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.855465 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.855658 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855828 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.856191 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.869648 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0729 14:43:34.870131 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.870600 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.870618 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.871134 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.871334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.873088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.873340 1039440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:34.873353 1039440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:43:34.873369 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.876289 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876751 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.876765 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876952 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.877132 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.877267 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.877375 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
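Each `new ssh client` entry above opens an SSH session to the node with the per-machine key and the docker user. A minimal sketch with golang.org/x/crypto/ssh (this is the same idea, not minikube's sshutil package itself):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.72.233:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl start kubelet")
	fmt.Println(string(out), err)
}
```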
	I0729 14:43:35.022897 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:43:35.044537 1039440 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057697 1039440 node_ready.go:49] node "default-k8s-diff-port-751306" has status "Ready":"True"
	I0729 14:43:35.057729 1039440 node_ready.go:38] duration metric: took 13.149458ms for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057744 1039440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:35.073050 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:35.150661 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:35.170721 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:35.228871 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:43:35.228903 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:43:35.276845 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:43:35.276880 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:43:35.335623 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.335656 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:43:35.407804 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.446540 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446567 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.446927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.446959 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.446972 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.446985 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.447286 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.447307 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.454199 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.454216 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.454476 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.454495 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.824592 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.824615 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.825058 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.825441 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.825525 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.825567 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.825576 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.827444 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.827454 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.827465 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331175 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331575 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331597 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331607 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331923 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331961 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331986 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.332003 1039440 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751306"
	I0729 14:43:36.333995 1039440 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 14:43:36.335441 1039440 addons.go:510] duration metric: took 1.53029708s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 14:43:37.081992 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.082019 1039440 pod_ready.go:81] duration metric: took 2.008931409s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.082031 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086173 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.086194 1039440 pod_ready.go:81] duration metric: took 4.154163ms for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086203 1039440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090617 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.090636 1039440 pod_ready.go:81] duration metric: took 4.42625ms for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090647 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094929 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.094950 1039440 pod_ready.go:81] duration metric: took 4.296245ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094962 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099462 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.099483 1039440 pod_ready.go:81] duration metric: took 4.513354ms for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099495 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478252 1039440 pod_ready.go:92] pod "kube-proxy-tqtjx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.478281 1039440 pod_ready.go:81] duration metric: took 378.778206ms for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478295 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878655 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.878678 1039440 pod_ready.go:81] duration metric: took 400.374407ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878686 1039440 pod_ready.go:38] duration metric: took 2.820929833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
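The pod_ready waits above come down to reading each pod's Ready condition from its status. A sketch of that check with client-go (kubeconfig path and pod name taken from the log; the helper name is illustrative):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(context.Background(), client, "kube-system", "kube-proxy-tqtjx")
	fmt.Println(ready, err)
}
```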
	I0729 14:43:37.878702 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:43:37.878752 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:43:37.894699 1039440 api_server.go:72] duration metric: took 3.08960429s to wait for apiserver process to appear ...
	I0729 14:43:37.894730 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:43:37.894767 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:43:37.899710 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:43:37.900733 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:43:37.900757 1039440 api_server.go:131] duration metric: took 6.019707ms to wait for apiserver health ...
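The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 with body `ok`. The sketch below skips certificate verification for brevity (an assumption; the real check uses the cluster's credentials and CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	resp, err := client.Get("https://192.168.72.233:8444/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.StatusCode)
}
```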
	I0729 14:43:37.900765 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:43:38.083157 1039440 system_pods.go:59] 9 kube-system pods found
	I0729 14:43:38.083197 1039440 system_pods.go:61] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.083204 1039440 system_pods.go:61] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.083210 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.083215 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.083221 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.083226 1039440 system_pods.go:61] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.083231 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.083240 1039440 system_pods.go:61] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.083246 1039440 system_pods.go:61] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.083255 1039440 system_pods.go:74] duration metric: took 182.484884ms to wait for pod list to return data ...
	I0729 14:43:38.083269 1039440 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:43:38.277387 1039440 default_sa.go:45] found service account: "default"
	I0729 14:43:38.277418 1039440 default_sa.go:55] duration metric: took 194.142035ms for default service account to be created ...
	I0729 14:43:38.277429 1039440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:43:38.481158 1039440 system_pods.go:86] 9 kube-system pods found
	I0729 14:43:38.481194 1039440 system_pods.go:89] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.481202 1039440 system_pods.go:89] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.481210 1039440 system_pods.go:89] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.481217 1039440 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.481225 1039440 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.481230 1039440 system_pods.go:89] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.481236 1039440 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.481248 1039440 system_pods.go:89] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.481255 1039440 system_pods.go:89] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.481267 1039440 system_pods.go:126] duration metric: took 203.830126ms to wait for k8s-apps to be running ...
	I0729 14:43:38.481280 1039440 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:43:38.481329 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:38.496175 1039440 system_svc.go:56] duration metric: took 14.88714ms WaitForService to wait for kubelet
	I0729 14:43:38.496209 1039440 kubeadm.go:582] duration metric: took 3.691120463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:43:38.496237 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:43:38.677820 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:43:38.677847 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:43:38.677859 1039440 node_conditions.go:105] duration metric: took 181.616437ms to run NodePressure ...
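The NodePressure verification above inspects the node's pressure conditions, and the capacity figures (2 CPUs, 17734596Ki ephemeral storage) come from the node status. A client-go sketch of reading both (kubeconfig path assumed as before):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node, err := client.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-751306", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Pressure conditions should all be False on a healthy node.
	for _, cond := range node.Status.Conditions {
		switch cond.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			fmt.Printf("%s=%s\n", cond.Type, cond.Status)
		}
	}
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
}
```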
	I0729 14:43:38.677874 1039440 start.go:241] waiting for startup goroutines ...
	I0729 14:43:38.677882 1039440 start.go:246] waiting for cluster config update ...
	I0729 14:43:38.677894 1039440 start.go:255] writing updated cluster config ...
	I0729 14:43:38.678166 1039440 ssh_runner.go:195] Run: rm -f paused
	I0729 14:43:38.728616 1039440 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:43:38.730494 1039440 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751306" cluster and "default" namespace by default
	I0729 14:43:40.128244 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:40.128447 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:55.945251 1038758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.277690166s)
	I0729 14:43:55.945335 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:55.960870 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:55.971175 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:55.981424 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:55.981456 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:55.981512 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:43:55.992098 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:55.992165 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:56.002242 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:43:56.011416 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:56.011486 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:56.020848 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.030219 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:56.030280 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.039957 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:43:56.049607 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:56.049670 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
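The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at control-plane.minikube.internal:8443 and is removed otherwise, before kubeadm init runs again. A rough shell equivalent (illustrative only, not minikube's actual implementation):

	# drop any kubeconfig that does not reference the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done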
	I0729 14:43:56.059413 1038758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:56.109453 1038758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 14:43:56.109563 1038758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:56.230876 1038758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:56.231018 1038758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:56.231126 1038758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 14:43:56.244355 1038758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:56.246461 1038758 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:56.246573 1038758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:56.246666 1038758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:56.246755 1038758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:56.246843 1038758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:56.246964 1038758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:56.247169 1038758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:56.247267 1038758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:56.247365 1038758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:56.247473 1038758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:56.247588 1038758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:56.247646 1038758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:56.247718 1038758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:56.593641 1038758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:56.714510 1038758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:56.862780 1038758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:57.010367 1038758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:57.108324 1038758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:57.109028 1038758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:57.111425 1038758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:57.113088 1038758 out.go:204]   - Booting up control plane ...
	I0729 14:43:57.113217 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:57.113336 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:57.113501 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:57.135168 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:57.141915 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:57.142022 1038758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:57.269947 1038758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:57.270056 1038758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:57.772110 1038758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.03343ms
	I0729 14:43:57.772229 1038758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:44:02.773898 1038758 kubeadm.go:310] [api-check] The API server is healthy after 5.00168383s
	I0729 14:44:02.788629 1038758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:44:02.805813 1038758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:44:02.831687 1038758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:44:02.831963 1038758 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-603534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:44:02.842427 1038758 kubeadm.go:310] [bootstrap-token] Using token: hg3j3v.551bb9ju0g9ic9e6
	I0729 14:44:00.129004 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:00.129267 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:02.844018 1038758 out.go:204]   - Configuring RBAC rules ...
	I0729 14:44:02.844160 1038758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:44:02.851693 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:44:02.859496 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:44:02.863556 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:44:02.866896 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:44:02.871375 1038758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:44:03.181687 1038758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:44:03.618445 1038758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:44:04.184562 1038758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:44:04.185548 1038758 kubeadm.go:310] 
	I0729 14:44:04.185655 1038758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:44:04.185689 1038758 kubeadm.go:310] 
	I0729 14:44:04.185788 1038758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:44:04.185801 1038758 kubeadm.go:310] 
	I0729 14:44:04.185825 1038758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:44:04.185906 1038758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:44:04.185983 1038758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:44:04.185992 1038758 kubeadm.go:310] 
	I0729 14:44:04.186079 1038758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:44:04.186090 1038758 kubeadm.go:310] 
	I0729 14:44:04.186155 1038758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:44:04.186165 1038758 kubeadm.go:310] 
	I0729 14:44:04.186231 1038758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:44:04.186337 1038758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:44:04.186431 1038758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:44:04.186441 1038758 kubeadm.go:310] 
	I0729 14:44:04.186575 1038758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:44:04.186679 1038758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:44:04.186689 1038758 kubeadm.go:310] 
	I0729 14:44:04.186810 1038758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.186944 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:44:04.186974 1038758 kubeadm.go:310] 	--control-plane 
	I0729 14:44:04.186984 1038758 kubeadm.go:310] 
	I0729 14:44:04.187102 1038758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:44:04.187111 1038758 kubeadm.go:310] 
	I0729 14:44:04.187224 1038758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.187375 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:44:04.188377 1038758 kubeadm.go:310] W0729 14:43:56.090027    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188711 1038758 kubeadm.go:310] W0729 14:43:56.090887    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188834 1038758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:04.188852 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:44:04.188863 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:44:04.190535 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:44:04.191948 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:44:04.203414 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
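The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referenced two lines up. Its contents are not captured in this log; a typical bridge conflist of roughly that size (illustrative only, the real file may differ) could be written like this:

	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF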
	I0729 14:44:04.223025 1038758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:44:04.223114 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.223132 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603534 minikube.k8s.io/updated_at=2024_07_29T14_44_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=no-preload-603534 minikube.k8s.io/primary=true
	I0729 14:44:04.240353 1038758 ops.go:34] apiserver oom_adj: -16
	I0729 14:44:04.442077 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.942458 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.442843 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.942138 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.442232 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.942611 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.442939 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.942661 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.443044 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.522590 1038758 kubeadm.go:1113] duration metric: took 4.299548803s to wait for elevateKubeSystemPrivileges
	I0729 14:44:08.522633 1038758 kubeadm.go:394] duration metric: took 5m0.491164642s to StartCluster
	I0729 14:44:08.522657 1038758 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.522755 1038758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:44:08.524573 1038758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.524893 1038758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:44:08.524999 1038758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:44:08.525112 1038758 addons.go:69] Setting storage-provisioner=true in profile "no-preload-603534"
	I0729 14:44:08.525150 1038758 addons.go:234] Setting addon storage-provisioner=true in "no-preload-603534"
	I0729 14:44:08.525146 1038758 addons.go:69] Setting default-storageclass=true in profile "no-preload-603534"
	I0729 14:44:08.525155 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:44:08.525167 1038758 addons.go:69] Setting metrics-server=true in profile "no-preload-603534"
	I0729 14:44:08.525182 1038758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603534"
	W0729 14:44:08.525162 1038758 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:44:08.525229 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525185 1038758 addons.go:234] Setting addon metrics-server=true in "no-preload-603534"
	W0729 14:44:08.525264 1038758 addons.go:243] addon metrics-server should already be in state true
	I0729 14:44:08.525294 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525510 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525553 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525652 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525668 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525688 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525715 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.526581 1038758 out.go:177] * Verifying Kubernetes components...
	I0729 14:44:08.527919 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:44:08.541874 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0729 14:44:08.542126 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0729 14:44:08.542251 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0729 14:44:08.542397 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542505 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542664 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542948 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.542969 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543075 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543090 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543115 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543127 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543323 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543546 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543551 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543758 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.543779 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544014 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.544035 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544149 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.548026 1038758 addons.go:234] Setting addon default-storageclass=true in "no-preload-603534"
	W0729 14:44:08.548048 1038758 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:44:08.548079 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.548457 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.548478 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.559699 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0729 14:44:08.560297 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.560916 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.560953 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.561332 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.561519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.563422 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.564073 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0729 14:44:08.564524 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.565011 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.565038 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.565427 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.565592 1038758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:44:08.565752 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.566901 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:44:08.566921 1038758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:44:08.566941 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.567688 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.568067 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I0729 14:44:08.568443 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.569019 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.569040 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.569462 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.569583 1038758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:44:08.570038 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.570074 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.571187 1038758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.571204 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:44:08.571223 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.571595 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572203 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.572247 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572506 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.572704 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.572893 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.573100 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.574551 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.574900 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.574919 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.575074 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.575286 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.575427 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.575551 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.585902 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0729 14:44:08.586319 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.586778 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.586803 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.587135 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.587357 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.588606 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.588827 1038758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.588844 1038758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:44:08.588861 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.591169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591434 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.591466 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591600 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.591766 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.591873 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.592103 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.752015 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:44:08.775498 1038758 node_ready.go:35] waiting up to 6m0s for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788547 1038758 node_ready.go:49] node "no-preload-603534" has status "Ready":"True"
	I0729 14:44:08.788572 1038758 node_ready.go:38] duration metric: took 13.040411ms for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788582 1038758 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:08.793475 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:08.861468 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.869542 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:44:08.869567 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:44:08.898398 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.911120 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:44:08.911148 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:44:08.931151 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:08.931179 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:44:08.976093 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:09.449857 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449885 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.449863 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449958 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450343 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450354 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450361 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450373 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450374 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450389 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450442 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450455 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450476 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450487 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450620 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450635 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450637 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450779 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450799 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.493934 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.493959 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.494303 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.494320 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.494342 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.706038 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706072 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.706366 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.706382 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.706391 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706398 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.707956 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.707958 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.707986 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.708015 1038758 addons.go:475] Verifying addon metrics-server=true in "no-preload-603534"
	I0729 14:44:09.709729 1038758 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:44:09.711283 1038758 addons.go:510] duration metric: took 1.186289164s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
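Enabling the metrics-server addon only verifies that its manifests were applied; the pod itself can remain Pending for some time, as the pod listings further down show. A manual follow-up, assuming the upstream resource names used by the addon, might be:

	kubectl -n kube-system get deployment metrics-server
	kubectl -n kube-system get pods -l k8s-app=metrics-server
	kubectl get apiservice v1beta1.metrics.k8s.io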
	I0729 14:44:10.807976 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:13.300325 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:15.800886 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.300042 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.800080 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.800111 1038758 pod_ready.go:81] duration metric: took 10.006613711s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.800124 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804949 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.804974 1038758 pod_ready.go:81] duration metric: took 4.840477ms for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804985 1038758 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810160 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.810176 1038758 pod_ready.go:81] duration metric: took 5.184516ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810185 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814785 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.814807 1038758 pod_ready.go:81] duration metric: took 4.615516ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814819 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819023 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.819044 1038758 pod_ready.go:81] duration metric: took 4.215656ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819056 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198226 1038758 pod_ready.go:92] pod "kube-proxy-7mr4z" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.198252 1038758 pod_ready.go:81] duration metric: took 379.18928ms for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198265 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598783 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.598824 1038758 pod_ready.go:81] duration metric: took 400.55255ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598835 1038758 pod_ready.go:38] duration metric: took 10.810240266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:19.598865 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:44:19.598931 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:44:19.615165 1038758 api_server.go:72] duration metric: took 11.090236578s to wait for apiserver process to appear ...
	I0729 14:44:19.615190 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:44:19.615211 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:44:19.619574 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:44:19.620586 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:44:19.620610 1038758 api_server.go:131] duration metric: took 5.412598ms to wait for apiserver health ...
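The healthz probe above is a plain GET against the API server, which by default allows unauthenticated access to /healthz. Reproducing it by hand with the node IP from this log (a sketch, skipping CA verification for brevity):

	curl -k https://192.168.61.116:8443/healthz   # expected body: ok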
	I0729 14:44:19.620620 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:44:19.802376 1038758 system_pods.go:59] 9 kube-system pods found
	I0729 14:44:19.802408 1038758 system_pods.go:61] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:19.802415 1038758 system_pods.go:61] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:19.802420 1038758 system_pods.go:61] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:19.802429 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:19.802434 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:19.802441 1038758 system_pods.go:61] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:19.802446 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:19.802454 1038758 system_pods.go:61] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:19.802470 1038758 system_pods.go:61] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:19.802482 1038758 system_pods.go:74] duration metric: took 181.853357ms to wait for pod list to return data ...
	I0729 14:44:19.802491 1038758 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:44:19.998312 1038758 default_sa.go:45] found service account: "default"
	I0729 14:44:19.998348 1038758 default_sa.go:55] duration metric: took 195.845187ms for default service account to be created ...
	I0729 14:44:19.998361 1038758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:44:20.201742 1038758 system_pods.go:86] 9 kube-system pods found
	I0729 14:44:20.201778 1038758 system_pods.go:89] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:20.201787 1038758 system_pods.go:89] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:20.201793 1038758 system_pods.go:89] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:20.201800 1038758 system_pods.go:89] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:20.201807 1038758 system_pods.go:89] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:20.201812 1038758 system_pods.go:89] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:20.201818 1038758 system_pods.go:89] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:20.201826 1038758 system_pods.go:89] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:20.201835 1038758 system_pods.go:89] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:20.201850 1038758 system_pods.go:126] duration metric: took 203.481528ms to wait for k8s-apps to be running ...
	I0729 14:44:20.201860 1038758 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:44:20.201914 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:20.217416 1038758 system_svc.go:56] duration metric: took 15.543768ms WaitForService to wait for kubelet
	I0729 14:44:20.217445 1038758 kubeadm.go:582] duration metric: took 11.692521258s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:44:20.217464 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:44:20.398667 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:44:20.398696 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:44:20.398708 1038758 node_conditions.go:105] duration metric: took 181.238886ms to run NodePressure ...
	I0729 14:44:20.398720 1038758 start.go:241] waiting for startup goroutines ...
	I0729 14:44:20.398727 1038758 start.go:246] waiting for cluster config update ...
	I0729 14:44:20.398738 1038758 start.go:255] writing updated cluster config ...
	I0729 14:44:20.399014 1038758 ssh_runner.go:195] Run: rm -f paused
	I0729 14:44:20.452187 1038758 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 14:44:20.454434 1038758 out.go:177] * Done! kubectl is now configured to use "no-preload-603534" cluster and "default" namespace by default
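The "minor skew: 1" note reflects kubectl 1.30.3 talking to a v1.31.0-beta.0 control plane, which is within the supported one-minor-version client/server skew, so only an informational message is printed. To see both versions side by side (assuming the kubeconfig written above):

	kubectl --context no-preload-603534 version --output=yaml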
	I0729 14:44:40.130597 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:40.130831 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:40.130848 1039759 kubeadm.go:310] 
	I0729 14:44:40.130903 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:44:40.130956 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:44:40.130966 1039759 kubeadm.go:310] 
	I0729 14:44:40.131032 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:44:40.131110 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:44:40.131256 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:44:40.131270 1039759 kubeadm.go:310] 
	I0729 14:44:40.131450 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:44:40.131499 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:44:40.131542 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:44:40.131552 1039759 kubeadm.go:310] 
	I0729 14:44:40.131686 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:44:40.131795 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:44:40.131806 1039759 kubeadm.go:310] 
	I0729 14:44:40.131947 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:44:40.132064 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:44:40.132162 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:44:40.132254 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:44:40.132264 1039759 kubeadm.go:310] 
	I0729 14:44:40.133208 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:40.133363 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:44:40.133468 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 14:44:40.133610 1039759 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 14:44:40.133676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:44:40.607039 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:40.623771 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:44:40.636278 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:44:40.636310 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:44:40.636371 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:44:40.647768 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:44:40.647827 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:44:40.658281 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:44:40.668393 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:44:40.668477 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:44:40.678521 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.687891 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:44:40.687960 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.698384 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:44:40.708965 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:44:40.709047 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:44:40.719665 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:44:40.796786 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:44:40.796883 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:44:40.946106 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:44:40.946258 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:44:40.946388 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:44:41.140483 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:44:41.142390 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:44:41.142503 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:44:41.142610 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:44:41.142722 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:44:41.142811 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:44:41.142910 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:44:41.142995 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:44:41.143092 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:44:41.143180 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:44:41.143279 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:44:41.143390 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:44:41.143445 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:44:41.143524 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:44:41.188854 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:44:41.329957 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:44:41.968599 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:44:42.034788 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:44:42.055543 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:44:42.056622 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:44:42.056715 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:44:42.204165 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:44:42.205935 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:44:42.206076 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:44:42.216259 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:44:42.217947 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:44:42.219361 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:44:42.221672 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:45:22.223830 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:45:22.223940 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:22.224139 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:27.224303 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:27.224574 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:37.224905 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:37.225090 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:57.226285 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:57.226533 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227279 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:46:37.227485 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227494 1039759 kubeadm.go:310] 
	I0729 14:46:37.227528 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:46:37.227605 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:46:37.227627 1039759 kubeadm.go:310] 
	I0729 14:46:37.227683 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:46:37.227732 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:46:37.227861 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:46:37.227870 1039759 kubeadm.go:310] 
	I0729 14:46:37.228011 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:46:37.228093 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:46:37.228140 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:46:37.228173 1039759 kubeadm.go:310] 
	I0729 14:46:37.228310 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:46:37.228443 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:46:37.228454 1039759 kubeadm.go:310] 
	I0729 14:46:37.228606 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:46:37.228714 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:46:37.228821 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:46:37.228913 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:46:37.228934 1039759 kubeadm.go:310] 
	I0729 14:46:37.229926 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:46:37.230070 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:46:37.230175 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:46:37.230284 1039759 kubeadm.go:394] duration metric: took 7m57.608522587s to StartCluster
	I0729 14:46:37.230347 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:46:37.230435 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:46:37.276238 1039759 cri.go:89] found id: ""
	I0729 14:46:37.276294 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.276304 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:46:37.276317 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:46:37.276439 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:46:37.309934 1039759 cri.go:89] found id: ""
	I0729 14:46:37.309960 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.309969 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:46:37.309975 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:46:37.310031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:46:37.343286 1039759 cri.go:89] found id: ""
	I0729 14:46:37.343312 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.343320 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:46:37.343325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:46:37.343384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:46:37.378735 1039759 cri.go:89] found id: ""
	I0729 14:46:37.378763 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.378773 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:46:37.378779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:46:37.378834 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:46:37.414244 1039759 cri.go:89] found id: ""
	I0729 14:46:37.414275 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.414284 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:46:37.414290 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:46:37.414372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:46:37.453809 1039759 cri.go:89] found id: ""
	I0729 14:46:37.453842 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.453858 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:46:37.453866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:46:37.453955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:46:37.492250 1039759 cri.go:89] found id: ""
	I0729 14:46:37.492279 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.492288 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:46:37.492294 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:46:37.492360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:46:37.554342 1039759 cri.go:89] found id: ""
	I0729 14:46:37.554377 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.554388 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:46:37.554404 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:46:37.554422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:46:37.631118 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:46:37.631165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:46:37.650991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:46:37.651047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:46:37.731852 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:46:37.731880 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:46:37.731897 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:46:37.849049 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:46:37.849092 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 14:46:37.893957 1039759 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:46:37.894031 1039759 out.go:239] * 
	W0729 14:46:37.894120 1039759 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.894150 1039759 out.go:239] * 
	W0729 14:46:37.895278 1039759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:46:37.898735 1039759 out.go:177] 
	W0729 14:46:37.900049 1039759 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.900115 1039759 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:46:37.900146 1039759 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 14:46:37.901531 1039759 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.108044208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264943108020331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=808e79a1-8b0c-45a6-8767-4c40f2fb011b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.108533216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79e297bc-9820-48fc-a8c0-72ff3d885e16 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.108604221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79e297bc-9820-48fc-a8c0-72ff3d885e16 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.108682026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=79e297bc-9820-48fc-a8c0-72ff3d885e16 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.141117428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62d58fe8-91c9-4015-869f-cc942b7445a1 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.141226009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62d58fe8-91c9-4015-869f-cc942b7445a1 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.142386191Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2f7e15c-51c1-46c0-9964-40b5f7816028 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.142848343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264943142805059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2f7e15c-51c1-46c0-9964-40b5f7816028 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.143580882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f258634-0e7a-4e5e-b618-82aae661eeb4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.143716279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f258634-0e7a-4e5e-b618-82aae661eeb4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.143772384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0f258634-0e7a-4e5e-b618-82aae661eeb4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.174956896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93f6f872-247a-4a73-8d55-095deee7ba94 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.175092142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93f6f872-247a-4a73-8d55-095deee7ba94 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.176434960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e9f8ea3-4a61-483c-9a76-00dbcaaeca3e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.176858383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264943176838467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e9f8ea3-4a61-483c-9a76-00dbcaaeca3e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.177360068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89d182a4-b132-40af-be9e-420fd2b8a6c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.177438695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89d182a4-b132-40af-be9e-420fd2b8a6c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.177476157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=89d182a4-b132-40af-be9e-420fd2b8a6c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.210231387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64a611be-40be-4f0b-b184-4805d6a3fa97 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.210332867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64a611be-40be-4f0b-b184-4805d6a3fa97 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.212445212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b1937c4-98ca-4391-b136-2f39c157d6d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.212901546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722264943212874559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b1937c4-98ca-4391-b136-2f39c157d6d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.213403871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ae65ea0-a776-41da-8fd1-81be1cfbe4cb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.213463465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ae65ea0-a776-41da-8fd1-81be1cfbe4cb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:55:43 old-k8s-version-360866 crio[647]: time="2024-07-29 14:55:43.213494331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0ae65ea0-a776-41da-8fd1-81be1cfbe4cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 14:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057215] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048160] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.116415] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.599440] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.593896] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.539359] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.065084] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070151] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.197280] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.136393] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.263438] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.439493] systemd-fstab-generator[833]: Ignoring "noauto" option for root device
	[  +0.060871] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.313828] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +12.194292] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 14:42] systemd-fstab-generator[4994]: Ignoring "noauto" option for root device
	[Jul29 14:44] systemd-fstab-generator[5279]: Ignoring "noauto" option for root device
	[  +0.064843] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:55:43 up 17 min,  0 users,  load average: 0.06, 0.05, 0.04
	Linux old-k8s-version-360866 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000a8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000c40ca0, 0x0, 0x0, 0x0)
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0008d8000)
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: goroutine 159 [chan receive]:
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc0008bf1f0, 0xc00009f3e0)
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c2cf20, 0xc000be82a0)
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: goroutine 160 [chan receive]:
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000c14990)
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 29 14:55:43 old-k8s-version-360866 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 29 14:55:43 old-k8s-version-360866 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 14:55:43 old-k8s-version-360866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 2 (253.307917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-360866" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)
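The failure above is the kubelet never becoming healthy during 'kubeadm init', and the log's own suggestion points at the kubelet cgroup driver. A minimal sketch of the follow-up checks and the suggested retry, reusing only the profile name, driver, runtime and Kubernetes version recorded in this run (the extra-config flag is the one quoted in the suggestion above, not verified against this minikube build):

	# inspect the kubelet on the node, as the kubeadm error above recommends
	out/minikube-linux-amd64 -p old-k8s-version-360866 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-360866 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# retry the start with the cgroup-driver hint quoted in the suggestion
	out/minikube-linux-amd64 start -p old-k8s-version-360866 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd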

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (474.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-668123 -n embed-certs-668123
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 14:59:31.092473322 +0000 UTC m=+6457.500197098
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-668123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-668123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.444µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-668123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
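The deployment info above is empty because the 'kubectl describe' itself hit the context deadline. A hedged sketch of checking the dashboard addon by hand, reusing only the context name, namespace, label selector and deployment name that appear in this run (plain kubectl, nothing minikube-specific assumed):

	# list the dashboard pods the test was waiting for
	kubectl --context embed-certs-668123 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# inspect the scraper deployment and the image it was expected to contain (registry.k8s.io/echoserver:1.4)
	kubectl --context embed-certs-668123 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
	kubectl --context embed-certs-668123 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'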
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-668123 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-668123 logs -n 25: (1.158376664s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-513289 sudo crio                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-513289                                      | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-054967 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | disable-driver-mounts-054967                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:58 UTC | 29 Jul 24 14:58 UTC |
	| start   | -p newest-cni-342058 --memory=2200 --alsologtostderr   | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:58 UTC | 29 Jul 24 14:59 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:58 UTC | 29 Jul 24 14:58 UTC |
	| addons  | enable metrics-server -p newest-cni-342058             | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:59 UTC | 29 Jul 24 14:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-342058                                   | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
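	For reference, the newest-cni-342058 start listed in the table above is a single invocation whose flags are split across wrapped table cells; reassembled (the binary path is taken from the MINIKUBE_BIN value reported in the log below, so treat the exact path as an assumption), it reads:
	
	    out/minikube-linux-amd64 start -p newest-cni-342058 --memory=2200 --alsologtostderr \
	      --wait=apiserver,system_pods,default_sa \
	      --feature-gates ServerSideApply=true \
	      --network-plugin=cni \
	      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	      --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.31.0-beta.0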
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:58:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:58:35.824086 1045900 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:58:35.824438 1045900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:58:35.824453 1045900 out.go:304] Setting ErrFile to fd 2...
	I0729 14:58:35.824460 1045900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:58:35.824649 1045900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:58:35.825261 1045900 out.go:298] Setting JSON to false
	I0729 14:58:35.826335 1045900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16868,"bootTime":1722248248,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:58:35.826397 1045900 start.go:139] virtualization: kvm guest
	I0729 14:58:35.828545 1045900 out.go:177] * [newest-cni-342058] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:58:35.829936 1045900 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:58:35.829988 1045900 notify.go:220] Checking for updates...
	I0729 14:58:35.832514 1045900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:58:35.833801 1045900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:58:35.834939 1045900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:58:35.835984 1045900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:58:35.836986 1045900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:58:35.838590 1045900 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:58:35.838700 1045900 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:58:35.838797 1045900 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:58:35.838921 1045900 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:58:35.877174 1045900 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 14:58:35.878316 1045900 start.go:297] selected driver: kvm2
	I0729 14:58:35.878329 1045900 start.go:901] validating driver "kvm2" against <nil>
	I0729 14:58:35.878340 1045900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:58:35.879051 1045900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:58:35.879135 1045900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:58:35.894392 1045900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:58:35.894442 1045900 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 14:58:35.894471 1045900 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 14:58:35.894765 1045900 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 14:58:35.894815 1045900 cni.go:84] Creating CNI manager for ""
	I0729 14:58:35.894829 1045900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:58:35.894843 1045900 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 14:58:35.894928 1045900 start.go:340] cluster config:
	{Name:newest-cni-342058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-342058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:58:35.895077 1045900 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:58:35.896871 1045900 out.go:177] * Starting "newest-cni-342058" primary control-plane node in "newest-cni-342058" cluster
	I0729 14:58:35.898117 1045900 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:58:35.898152 1045900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:58:35.898162 1045900 cache.go:56] Caching tarball of preloaded images
	I0729 14:58:35.898237 1045900 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:58:35.898250 1045900 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 14:58:35.898353 1045900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/config.json ...
	I0729 14:58:35.898372 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/config.json: {Name:mkc145ec5636537f5dfe60e5bf91f2b50771e489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:58:35.898538 1045900 start.go:360] acquireMachinesLock for newest-cni-342058: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:58:35.898573 1045900 start.go:364] duration metric: took 19.613µs to acquireMachinesLock for "newest-cni-342058"
	I0729 14:58:35.898604 1045900 start.go:93] Provisioning new machine with config: &{Name:newest-cni-342058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-342058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:58:35.898687 1045900 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 14:58:35.901275 1045900 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 14:58:35.901412 1045900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:58:35.901452 1045900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:58:35.915503 1045900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0729 14:58:35.915969 1045900 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:58:35.916538 1045900 main.go:141] libmachine: Using API Version  1
	I0729 14:58:35.916562 1045900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:58:35.916907 1045900 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:58:35.917084 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetMachineName
	I0729 14:58:35.917264 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:58:35.917445 1045900 start.go:159] libmachine.API.Create for "newest-cni-342058" (driver="kvm2")
	I0729 14:58:35.917472 1045900 client.go:168] LocalClient.Create starting
	I0729 14:58:35.917524 1045900 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem
	I0729 14:58:35.917556 1045900 main.go:141] libmachine: Decoding PEM data...
	I0729 14:58:35.917573 1045900 main.go:141] libmachine: Parsing certificate...
	I0729 14:58:35.917632 1045900 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem
	I0729 14:58:35.917650 1045900 main.go:141] libmachine: Decoding PEM data...
	I0729 14:58:35.917663 1045900 main.go:141] libmachine: Parsing certificate...
	I0729 14:58:35.917676 1045900 main.go:141] libmachine: Running pre-create checks...
	I0729 14:58:35.917690 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .PreCreateCheck
	I0729 14:58:35.918016 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetConfigRaw
	I0729 14:58:35.918429 1045900 main.go:141] libmachine: Creating machine...
	I0729 14:58:35.918442 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .Create
	I0729 14:58:35.918557 1045900 main.go:141] libmachine: (newest-cni-342058) Creating KVM machine...
	I0729 14:58:35.919791 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found existing default KVM network
	I0729 14:58:35.921496 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:35.921333 1045923 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c20}
	I0729 14:58:35.921561 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | created network xml: 
	I0729 14:58:35.921586 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | <network>
	I0729 14:58:35.921600 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |   <name>mk-newest-cni-342058</name>
	I0729 14:58:35.921619 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |   <dns enable='no'/>
	I0729 14:58:35.921627 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |   
	I0729 14:58:35.921633 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 14:58:35.921641 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |     <dhcp>
	I0729 14:58:35.921647 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 14:58:35.921685 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |     </dhcp>
	I0729 14:58:35.921702 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |   </ip>
	I0729 14:58:35.921719 1045900 main.go:141] libmachine: (newest-cni-342058) DBG |   
	I0729 14:58:35.921727 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | </network>
	I0729 14:58:35.921735 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | 
	I0729 14:58:35.926379 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | trying to create private KVM network mk-newest-cni-342058 192.168.39.0/24...
	I0729 14:58:35.997269 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | private KVM network mk-newest-cni-342058 192.168.39.0/24 created
	I0729 14:58:35.997327 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:35.997235 1045923 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:58:35.997375 1045900 main.go:141] libmachine: (newest-cni-342058) Setting up store path in /home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058 ...
	I0729 14:58:35.997407 1045900 main.go:141] libmachine: (newest-cni-342058) Building disk image from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 14:58:35.997432 1045900 main.go:141] libmachine: (newest-cni-342058) Downloading /home/jenkins/minikube-integration/19338-974764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 14:58:36.301349 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:36.301154 1045923 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa...
	I0729 14:58:36.466043 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:36.465920 1045923 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/newest-cni-342058.rawdisk...
	I0729 14:58:36.466073 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Writing magic tar header
	I0729 14:58:36.466086 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Writing SSH key tar header
	I0729 14:58:36.466094 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:36.466058 1045923 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058 ...
	I0729 14:58:36.466208 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058
	I0729 14:58:36.466234 1045900 main.go:141] libmachine: (newest-cni-342058) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058 (perms=drwx------)
	I0729 14:58:36.466246 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube/machines
	I0729 14:58:36.466260 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:58:36.466268 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-974764
	I0729 14:58:36.466288 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 14:58:36.466298 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Checking permissions on dir: /home/jenkins
	I0729 14:58:36.466314 1045900 main.go:141] libmachine: (newest-cni-342058) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube/machines (perms=drwxr-xr-x)
	I0729 14:58:36.466328 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Checking permissions on dir: /home
	I0729 14:58:36.466341 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Skipping /home - not owner
	I0729 14:58:36.466351 1045900 main.go:141] libmachine: (newest-cni-342058) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764/.minikube (perms=drwxr-xr-x)
	I0729 14:58:36.466358 1045900 main.go:141] libmachine: (newest-cni-342058) Setting executable bit set on /home/jenkins/minikube-integration/19338-974764 (perms=drwxrwxr-x)
	I0729 14:58:36.466367 1045900 main.go:141] libmachine: (newest-cni-342058) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 14:58:36.466374 1045900 main.go:141] libmachine: (newest-cni-342058) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 14:58:36.466381 1045900 main.go:141] libmachine: (newest-cni-342058) Creating domain...
	I0729 14:58:36.467617 1045900 main.go:141] libmachine: (newest-cni-342058) define libvirt domain using xml: 
	I0729 14:58:36.467636 1045900 main.go:141] libmachine: (newest-cni-342058) <domain type='kvm'>
	I0729 14:58:36.467646 1045900 main.go:141] libmachine: (newest-cni-342058)   <name>newest-cni-342058</name>
	I0729 14:58:36.467654 1045900 main.go:141] libmachine: (newest-cni-342058)   <memory unit='MiB'>2200</memory>
	I0729 14:58:36.467663 1045900 main.go:141] libmachine: (newest-cni-342058)   <vcpu>2</vcpu>
	I0729 14:58:36.467672 1045900 main.go:141] libmachine: (newest-cni-342058)   <features>
	I0729 14:58:36.467681 1045900 main.go:141] libmachine: (newest-cni-342058)     <acpi/>
	I0729 14:58:36.467703 1045900 main.go:141] libmachine: (newest-cni-342058)     <apic/>
	I0729 14:58:36.467734 1045900 main.go:141] libmachine: (newest-cni-342058)     <pae/>
	I0729 14:58:36.467759 1045900 main.go:141] libmachine: (newest-cni-342058)     
	I0729 14:58:36.467766 1045900 main.go:141] libmachine: (newest-cni-342058)   </features>
	I0729 14:58:36.467774 1045900 main.go:141] libmachine: (newest-cni-342058)   <cpu mode='host-passthrough'>
	I0729 14:58:36.467788 1045900 main.go:141] libmachine: (newest-cni-342058)   
	I0729 14:58:36.467795 1045900 main.go:141] libmachine: (newest-cni-342058)   </cpu>
	I0729 14:58:36.467800 1045900 main.go:141] libmachine: (newest-cni-342058)   <os>
	I0729 14:58:36.467806 1045900 main.go:141] libmachine: (newest-cni-342058)     <type>hvm</type>
	I0729 14:58:36.467812 1045900 main.go:141] libmachine: (newest-cni-342058)     <boot dev='cdrom'/>
	I0729 14:58:36.467819 1045900 main.go:141] libmachine: (newest-cni-342058)     <boot dev='hd'/>
	I0729 14:58:36.467824 1045900 main.go:141] libmachine: (newest-cni-342058)     <bootmenu enable='no'/>
	I0729 14:58:36.467855 1045900 main.go:141] libmachine: (newest-cni-342058)   </os>
	I0729 14:58:36.467863 1045900 main.go:141] libmachine: (newest-cni-342058)   <devices>
	I0729 14:58:36.467868 1045900 main.go:141] libmachine: (newest-cni-342058)     <disk type='file' device='cdrom'>
	I0729 14:58:36.467877 1045900 main.go:141] libmachine: (newest-cni-342058)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/boot2docker.iso'/>
	I0729 14:58:36.467884 1045900 main.go:141] libmachine: (newest-cni-342058)       <target dev='hdc' bus='scsi'/>
	I0729 14:58:36.467889 1045900 main.go:141] libmachine: (newest-cni-342058)       <readonly/>
	I0729 14:58:36.467894 1045900 main.go:141] libmachine: (newest-cni-342058)     </disk>
	I0729 14:58:36.467900 1045900 main.go:141] libmachine: (newest-cni-342058)     <disk type='file' device='disk'>
	I0729 14:58:36.467908 1045900 main.go:141] libmachine: (newest-cni-342058)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 14:58:36.467916 1045900 main.go:141] libmachine: (newest-cni-342058)       <source file='/home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/newest-cni-342058.rawdisk'/>
	I0729 14:58:36.467923 1045900 main.go:141] libmachine: (newest-cni-342058)       <target dev='hda' bus='virtio'/>
	I0729 14:58:36.467928 1045900 main.go:141] libmachine: (newest-cni-342058)     </disk>
	I0729 14:58:36.467935 1045900 main.go:141] libmachine: (newest-cni-342058)     <interface type='network'>
	I0729 14:58:36.467941 1045900 main.go:141] libmachine: (newest-cni-342058)       <source network='mk-newest-cni-342058'/>
	I0729 14:58:36.467954 1045900 main.go:141] libmachine: (newest-cni-342058)       <model type='virtio'/>
	I0729 14:58:36.467960 1045900 main.go:141] libmachine: (newest-cni-342058)     </interface>
	I0729 14:58:36.467968 1045900 main.go:141] libmachine: (newest-cni-342058)     <interface type='network'>
	I0729 14:58:36.467991 1045900 main.go:141] libmachine: (newest-cni-342058)       <source network='default'/>
	I0729 14:58:36.468012 1045900 main.go:141] libmachine: (newest-cni-342058)       <model type='virtio'/>
	I0729 14:58:36.468021 1045900 main.go:141] libmachine: (newest-cni-342058)     </interface>
	I0729 14:58:36.468030 1045900 main.go:141] libmachine: (newest-cni-342058)     <serial type='pty'>
	I0729 14:58:36.468040 1045900 main.go:141] libmachine: (newest-cni-342058)       <target port='0'/>
	I0729 14:58:36.468052 1045900 main.go:141] libmachine: (newest-cni-342058)     </serial>
	I0729 14:58:36.468061 1045900 main.go:141] libmachine: (newest-cni-342058)     <console type='pty'>
	I0729 14:58:36.468075 1045900 main.go:141] libmachine: (newest-cni-342058)       <target type='serial' port='0'/>
	I0729 14:58:36.468084 1045900 main.go:141] libmachine: (newest-cni-342058)     </console>
	I0729 14:58:36.468088 1045900 main.go:141] libmachine: (newest-cni-342058)     <rng model='virtio'>
	I0729 14:58:36.468095 1045900 main.go:141] libmachine: (newest-cni-342058)       <backend model='random'>/dev/random</backend>
	I0729 14:58:36.468100 1045900 main.go:141] libmachine: (newest-cni-342058)     </rng>
	I0729 14:58:36.468107 1045900 main.go:141] libmachine: (newest-cni-342058)     
	I0729 14:58:36.468111 1045900 main.go:141] libmachine: (newest-cni-342058)     
	I0729 14:58:36.468116 1045900 main.go:141] libmachine: (newest-cni-342058)   </devices>
	I0729 14:58:36.468121 1045900 main.go:141] libmachine: (newest-cni-342058) </domain>
	I0729 14:58:36.468129 1045900 main.go:141] libmachine: (newest-cni-342058) 
	I0729 14:58:36.472481 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:30:a4:05 in network default
	I0729 14:58:36.473026 1045900 main.go:141] libmachine: (newest-cni-342058) Ensuring networks are active...
	I0729 14:58:36.473047 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:36.473610 1045900 main.go:141] libmachine: (newest-cni-342058) Ensuring network default is active
	I0729 14:58:36.473861 1045900 main.go:141] libmachine: (newest-cni-342058) Ensuring network mk-newest-cni-342058 is active
	I0729 14:58:36.474347 1045900 main.go:141] libmachine: (newest-cni-342058) Getting domain xml...
	I0729 14:58:36.475112 1045900 main.go:141] libmachine: (newest-cni-342058) Creating domain...
	I0729 14:58:36.809640 1045900 main.go:141] libmachine: (newest-cni-342058) Waiting to get IP...
	I0729 14:58:36.810641 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:36.811140 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:36.811200 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:36.811143 1045923 retry.go:31] will retry after 285.895043ms: waiting for machine to come up
	I0729 14:58:37.098546 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:37.099028 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:37.099051 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:37.098980 1045923 retry.go:31] will retry after 244.107511ms: waiting for machine to come up
	I0729 14:58:37.344484 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:37.345041 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:37.345073 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:37.344990 1045923 retry.go:31] will retry after 454.828771ms: waiting for machine to come up
	I0729 14:58:37.801682 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:37.802172 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:37.802204 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:37.802126 1045923 retry.go:31] will retry after 390.430451ms: waiting for machine to come up
	I0729 14:58:38.194749 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:38.195213 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:38.195243 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:38.195159 1045923 retry.go:31] will retry after 651.053616ms: waiting for machine to come up
	I0729 14:58:38.847895 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:38.848361 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:38.848376 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:38.848334 1045923 retry.go:31] will retry after 591.258195ms: waiting for machine to come up
	I0729 14:58:39.441135 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:39.441648 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:39.441674 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:39.441596 1045923 retry.go:31] will retry after 826.359711ms: waiting for machine to come up
	I0729 14:58:40.269573 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:40.270083 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:40.270108 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:40.270052 1045923 retry.go:31] will retry after 1.424643414s: waiting for machine to come up
	I0729 14:58:41.695912 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:41.696383 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:41.696437 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:41.696336 1045923 retry.go:31] will retry after 1.558180936s: waiting for machine to come up
	I0729 14:58:43.256764 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:43.257199 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:43.257230 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:43.257148 1045923 retry.go:31] will retry after 2.102722519s: waiting for machine to come up
	I0729 14:58:45.361724 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:45.362297 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:45.362322 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:45.362240 1045923 retry.go:31] will retry after 2.846761765s: waiting for machine to come up
	I0729 14:58:48.210169 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:48.210577 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:48.210607 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:48.210517 1045923 retry.go:31] will retry after 2.459470985s: waiting for machine to come up
	I0729 14:58:50.673017 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:50.673450 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:50.673480 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:50.673397 1045923 retry.go:31] will retry after 3.161108847s: waiting for machine to come up
	I0729 14:58:53.838090 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:53.838528 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find current IP address of domain newest-cni-342058 in network mk-newest-cni-342058
	I0729 14:58:53.838558 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | I0729 14:58:53.838480 1045923 retry.go:31] will retry after 4.051631782s: waiting for machine to come up
	I0729 14:58:57.892340 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:57.892723 1045900 main.go:141] libmachine: (newest-cni-342058) Found IP for machine: 192.168.39.180
	I0729 14:58:57.892746 1045900 main.go:141] libmachine: (newest-cni-342058) Reserving static IP address...
	I0729 14:58:57.892786 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has current primary IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:57.893110 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | unable to find host DHCP lease matching {name: "newest-cni-342058", mac: "52:54:00:1c:3f:6b", ip: "192.168.39.180"} in network mk-newest-cni-342058
	I0729 14:58:57.972102 1045900 main.go:141] libmachine: (newest-cni-342058) Reserved static IP address: 192.168.39.180
	I0729 14:58:57.972138 1045900 main.go:141] libmachine: (newest-cni-342058) Waiting for SSH to be available...
	I0729 14:58:57.972150 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Getting to WaitForSSH function...
	I0729 14:58:57.975102 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:57.975533 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:57.975576 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:57.975647 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Using SSH client type: external
	I0729 14:58:57.975674 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa (-rw-------)
	I0729 14:58:57.975775 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:58:57.975816 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | About to run SSH command:
	I0729 14:58:57.975840 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | exit 0
	I0729 14:58:58.100826 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | SSH cmd err, output: <nil>: 
	I0729 14:58:58.101119 1045900 main.go:141] libmachine: (newest-cni-342058) KVM machine creation complete!
	I0729 14:58:58.101457 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetConfigRaw
	I0729 14:58:58.101998 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:58:58.102200 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:58:58.102364 1045900 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 14:58:58.102380 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetState
	I0729 14:58:58.103846 1045900 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 14:58:58.103875 1045900 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 14:58:58.103884 1045900 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 14:58:58.103892 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:58.106133 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.106553 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:58.106589 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.106761 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:58.106962 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.107126 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.107272 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:58.107455 1045900 main.go:141] libmachine: Using SSH client type: native
	I0729 14:58:58.107668 1045900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 14:58:58.107678 1045900 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 14:58:58.203754 1045900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:58:58.203775 1045900 main.go:141] libmachine: Detecting the provisioner...
	I0729 14:58:58.203783 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:58.206741 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.207117 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:58.207163 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.207270 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:58.207472 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.207623 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.207735 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:58.207879 1045900 main.go:141] libmachine: Using SSH client type: native
	I0729 14:58:58.208057 1045900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 14:58:58.208068 1045900 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 14:58:58.309134 1045900 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 14:58:58.309271 1045900 main.go:141] libmachine: found compatible host: buildroot
	I0729 14:58:58.309282 1045900 main.go:141] libmachine: Provisioning with buildroot...
	I0729 14:58:58.309290 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetMachineName
	I0729 14:58:58.309529 1045900 buildroot.go:166] provisioning hostname "newest-cni-342058"
	I0729 14:58:58.309564 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetMachineName
	I0729 14:58:58.309778 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:58.312298 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.312684 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:58.312709 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.312872 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:58.313067 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.313241 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.313393 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:58.313556 1045900 main.go:141] libmachine: Using SSH client type: native
	I0729 14:58:58.313722 1045900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 14:58:58.313732 1045900 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-342058 && echo "newest-cni-342058" | sudo tee /etc/hostname
	I0729 14:58:58.428199 1045900 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-342058
	
	I0729 14:58:58.428231 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:58.430862 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.431218 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:58.431257 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.431387 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:58.431596 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.431761 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.431914 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:58.432117 1045900 main.go:141] libmachine: Using SSH client type: native
	I0729 14:58:58.432300 1045900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 14:58:58.432316 1045900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-342058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-342058/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-342058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:58:58.537342 1045900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:58:58.537377 1045900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:58:58.537401 1045900 buildroot.go:174] setting up certificates
	I0729 14:58:58.537411 1045900 provision.go:84] configureAuth start
	I0729 14:58:58.537420 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetMachineName
	I0729 14:58:58.537735 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetIP
	I0729 14:58:58.540474 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.540826 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:58.540859 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.541022 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:58.543147 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.543482 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:58.543511 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.543639 1045900 provision.go:143] copyHostCerts
	I0729 14:58:58.543723 1045900 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:58:58.543737 1045900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:58:58.543816 1045900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:58:58.543944 1045900 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:58:58.543956 1045900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:58:58.543995 1045900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:58:58.544119 1045900 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:58:58.544131 1045900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:58:58.544172 1045900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:58:58.544247 1045900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.newest-cni-342058 san=[127.0.0.1 192.168.39.180 localhost minikube newest-cni-342058]
	I0729 14:58:58.682890 1045900 provision.go:177] copyRemoteCerts
	I0729 14:58:58.682983 1045900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:58:58.683017 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:58.685728 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.686074 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:58.686096 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.686318 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:58.686543 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.686703 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:58.686869 1045900 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa Username:docker}
	I0729 14:58:58.766802 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:58:58.792041 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 14:58:58.817256 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:58:58.844035 1045900 provision.go:87] duration metric: took 306.608464ms to configureAuth
	I0729 14:58:58.844067 1045900 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:58:58.844282 1045900 config.go:182] Loaded profile config "newest-cni-342058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:58:58.844372 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:58.847147 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.847523 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:58.847553 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:58.847740 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:58.847948 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.848134 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:58.848345 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:58.848589 1045900 main.go:141] libmachine: Using SSH client type: native
	I0729 14:58:58.848813 1045900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 14:58:58.848840 1045900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:58:59.110512 1045900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:58:59.110537 1045900 main.go:141] libmachine: Checking connection to Docker...
	I0729 14:58:59.110545 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetURL
	I0729 14:58:59.111997 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Using libvirt version 6000000
	I0729 14:58:59.114461 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.114906 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:59.114936 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.115106 1045900 main.go:141] libmachine: Docker is up and running!
	I0729 14:58:59.115121 1045900 main.go:141] libmachine: Reticulating splines...
	I0729 14:58:59.115128 1045900 client.go:171] duration metric: took 23.197647041s to LocalClient.Create
	I0729 14:58:59.115150 1045900 start.go:167] duration metric: took 23.197707126s to libmachine.API.Create "newest-cni-342058"
	I0729 14:58:59.115159 1045900 start.go:293] postStartSetup for "newest-cni-342058" (driver="kvm2")
	I0729 14:58:59.115171 1045900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:58:59.115188 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:58:59.115455 1045900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:58:59.115485 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:59.117739 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.118065 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:59.118086 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.118227 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:59.118414 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:59.118588 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:59.118747 1045900 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa Username:docker}
	I0729 14:58:59.198987 1045900 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:58:59.203695 1045900 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:58:59.203721 1045900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:58:59.203796 1045900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:58:59.203890 1045900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:58:59.204003 1045900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:58:59.215917 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:58:59.241885 1045900 start.go:296] duration metric: took 126.710007ms for postStartSetup
	I0729 14:58:59.241937 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetConfigRaw
	I0729 14:58:59.242521 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetIP
	I0729 14:58:59.245281 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.245570 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:59.245608 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.245789 1045900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/config.json ...
	I0729 14:58:59.245983 1045900 start.go:128] duration metric: took 23.347283107s to createHost
	I0729 14:58:59.246007 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:59.248345 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.248683 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:59.248707 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.248868 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:59.249063 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:59.249220 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:59.249317 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:59.249451 1045900 main.go:141] libmachine: Using SSH client type: native
	I0729 14:58:59.249618 1045900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 14:58:59.249639 1045900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 14:58:59.345171 1045900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722265139.320546286
	
	I0729 14:58:59.345195 1045900 fix.go:216] guest clock: 1722265139.320546286
	I0729 14:58:59.345204 1045900 fix.go:229] Guest: 2024-07-29 14:58:59.320546286 +0000 UTC Remote: 2024-07-29 14:58:59.245995081 +0000 UTC m=+23.459545654 (delta=74.551205ms)
	I0729 14:58:59.345249 1045900 fix.go:200] guest clock delta is within tolerance: 74.551205ms
	I0729 14:58:59.345256 1045900 start.go:83] releasing machines lock for "newest-cni-342058", held for 23.446664506s
	I0729 14:58:59.345296 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:58:59.345582 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetIP
	I0729 14:58:59.348002 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.348364 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:59.348389 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.348599 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:58:59.349077 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:58:59.349283 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:58:59.349400 1045900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:58:59.349436 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:59.349549 1045900 ssh_runner.go:195] Run: cat /version.json
	I0729 14:58:59.349580 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:58:59.351927 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.352191 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:59.352218 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.352309 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:59.352343 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.352530 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:59.352680 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:58:59.352702 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:58:59.352711 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:59.352841 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:58:59.352902 1045900 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa Username:docker}
	I0729 14:58:59.352990 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:58:59.353128 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:58:59.353271 1045900 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa Username:docker}
	I0729 14:58:59.425598 1045900 ssh_runner.go:195] Run: systemctl --version
	I0729 14:58:59.448842 1045900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:58:59.616147 1045900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:58:59.622541 1045900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:58:59.622617 1045900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:58:59.638475 1045900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:58:59.638497 1045900 start.go:495] detecting cgroup driver to use...
	I0729 14:58:59.638573 1045900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:58:59.655178 1045900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:58:59.669404 1045900 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:58:59.669465 1045900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:58:59.682558 1045900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:58:59.695520 1045900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:58:59.817984 1045900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:58:59.985257 1045900 docker.go:233] disabling docker service ...
	I0729 14:58:59.985328 1045900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:58:59.998973 1045900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:59:00.012116 1045900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:59:00.147928 1045900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:59:00.275964 1045900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:59:00.291672 1045900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:59:00.312468 1045900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 14:59:00.312533 1045900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:59:00.322996 1045900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:59:00.323074 1045900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:59:00.333126 1045900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:59:00.343451 1045900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:59:00.353633 1045900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:59:00.364443 1045900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:59:00.374624 1045900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:59:00.392794 1045900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:59:00.403117 1045900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:59:00.412509 1045900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:59:00.412579 1045900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:59:00.424943 1045900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:59:00.434330 1045900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:59:00.559401 1045900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:59:00.704196 1045900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:59:00.704270 1045900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:59:00.708984 1045900 start.go:563] Will wait 60s for crictl version
	I0729 14:59:00.709037 1045900 ssh_runner.go:195] Run: which crictl
	I0729 14:59:00.712931 1045900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:59:00.752621 1045900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:59:00.752696 1045900 ssh_runner.go:195] Run: crio --version
	I0729 14:59:00.781882 1045900 ssh_runner.go:195] Run: crio --version
	I0729 14:59:00.813157 1045900 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 14:59:00.814337 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetIP
	I0729 14:59:00.816907 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:59:00.817215 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:59:00.817240 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:59:00.817482 1045900 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:59:00.822820 1045900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:59:00.836264 1045900 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0729 14:59:00.838134 1045900 kubeadm.go:883] updating cluster {Name:newest-cni-342058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-342058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:59:00.838298 1045900 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:59:00.838388 1045900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:59:00.871030 1045900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 14:59:00.871103 1045900 ssh_runner.go:195] Run: which lz4
	I0729 14:59:00.875159 1045900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 14:59:00.879482 1045900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:59:00.879509 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 14:59:02.246410 1045900 crio.go:462] duration metric: took 1.371295845s to copy over tarball
	I0729 14:59:02.246482 1045900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:59:04.253240 1045900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.006699468s)
	I0729 14:59:04.253276 1045900 crio.go:469] duration metric: took 2.006836666s to extract the tarball
	I0729 14:59:04.253287 1045900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:59:04.291526 1045900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:59:04.338431 1045900 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:59:04.338456 1045900 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:59:04.338466 1045900 kubeadm.go:934] updating node { 192.168.39.180 8443 v1.31.0-beta.0 crio true true} ...
	I0729 14:59:04.338641 1045900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-342058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-342058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:59:04.338720 1045900 ssh_runner.go:195] Run: crio config
	I0729 14:59:04.390488 1045900 cni.go:84] Creating CNI manager for ""
	I0729 14:59:04.390517 1045900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:59:04.390531 1045900 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0729 14:59:04.390565 1045900 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-342058 NodeName:newest-cni-342058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:59:04.390706 1045900 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-342058"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:59:04.390773 1045900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 14:59:04.401725 1045900 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:59:04.401795 1045900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:59:04.412461 1045900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0729 14:59:04.429098 1045900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 14:59:04.445181 1045900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0729 14:59:04.462031 1045900 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I0729 14:59:04.466219 1045900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:59:04.478529 1045900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:59:04.619001 1045900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:59:04.636128 1045900 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058 for IP: 192.168.39.180
	I0729 14:59:04.636155 1045900 certs.go:194] generating shared ca certs ...
	I0729 14:59:04.636178 1045900 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:04.636440 1045900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:59:04.636499 1045900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:59:04.636512 1045900 certs.go:256] generating profile certs ...
	I0729 14:59:04.636593 1045900 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/client.key
	I0729 14:59:04.636618 1045900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/client.crt with IP's: []
	I0729 14:59:04.942343 1045900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/client.crt ...
	I0729 14:59:04.942376 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/client.crt: {Name:mkf89a800d65a5a5c2142fafef924411a33ceeaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:04.942553 1045900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/client.key ...
	I0729 14:59:04.942564 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/client.key: {Name:mk0a4343e7c549d305efc90b4df170317bfd40bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:04.942640 1045900 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.key.529bec86
	I0729 14:59:04.942654 1045900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.crt.529bec86 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.180]
	I0729 14:59:05.051549 1045900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.crt.529bec86 ...
	I0729 14:59:05.051582 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.crt.529bec86: {Name:mkd638f303cb6fcebcefc2854d9b3954636f4565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:05.051743 1045900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.key.529bec86 ...
	I0729 14:59:05.051756 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.key.529bec86: {Name:mkc099d7de1fc8e5333b356fd78bffcc98b10336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:05.051826 1045900 certs.go:381] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.crt.529bec86 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.crt
	I0729 14:59:05.051921 1045900 certs.go:385] copying /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.key.529bec86 -> /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.key
	I0729 14:59:05.051980 1045900 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/proxy-client.key
	I0729 14:59:05.052003 1045900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/proxy-client.crt with IP's: []
	I0729 14:59:05.278257 1045900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/proxy-client.crt ...
	I0729 14:59:05.278293 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/proxy-client.crt: {Name:mk4a67a8e1b744b96be729361baa26406f592a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:05.278477 1045900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/proxy-client.key ...
	I0729 14:59:05.278496 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/proxy-client.key: {Name:mk595a2401f9dc3504fb6b79fd22b03357e30bc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:05.278744 1045900 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:59:05.278797 1045900 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:59:05.278828 1045900 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:59:05.278864 1045900 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:59:05.278896 1045900 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:59:05.278931 1045900 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:59:05.278981 1045900 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:59:05.279720 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:59:05.307568 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:59:05.334199 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:59:05.360039 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:59:05.384921 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:59:05.417080 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:59:05.444954 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:59:05.475099 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:59:05.498738 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:59:05.522662 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:59:05.546276 1045900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:59:05.570382 1045900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:59:05.587004 1045900 ssh_runner.go:195] Run: openssl version
	I0729 14:59:05.592889 1045900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:59:05.603505 1045900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:59:05.608181 1045900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:59:05.608246 1045900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:59:05.614125 1045900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:59:05.625196 1045900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:59:05.635748 1045900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:59:05.640200 1045900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:59:05.640250 1045900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:59:05.646244 1045900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:59:05.657433 1045900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:59:05.668341 1045900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:59:05.672948 1045900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:59:05.672995 1045900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:59:05.678874 1045900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:59:05.689810 1045900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:59:05.694954 1045900 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 14:59:05.695030 1045900 kubeadm.go:392] StartCluster: {Name:newest-cni-342058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-342058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:59:05.695146 1045900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:59:05.695191 1045900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:59:05.737365 1045900 cri.go:89] found id: ""
	I0729 14:59:05.737447 1045900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:59:05.749101 1045900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:59:05.758789 1045900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:59:05.771070 1045900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:59:05.771092 1045900 kubeadm.go:157] found existing configuration files:
	
	I0729 14:59:05.771147 1045900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:59:05.782302 1045900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:59:05.782376 1045900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:59:05.793258 1045900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:59:05.804007 1045900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:59:05.804066 1045900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:59:05.814882 1045900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:59:05.825066 1045900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:59:05.825125 1045900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:59:05.835716 1045900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:59:05.845512 1045900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:59:05.845597 1045900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:59:05.855608 1045900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:59:05.973341 1045900 kubeadm.go:310] W0729 14:59:05.955678     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:59:05.974348 1045900 kubeadm.go:310] W0729 14:59:05.957122     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:59:06.079375 1045900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:59:16.717177 1045900 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 14:59:16.717266 1045900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:59:16.717378 1045900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:59:16.717513 1045900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:59:16.717646 1045900 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 14:59:16.717722 1045900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:59:16.719245 1045900 out.go:204]   - Generating certificates and keys ...
	I0729 14:59:16.719330 1045900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:59:16.719412 1045900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:59:16.719495 1045900 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 14:59:16.719570 1045900 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 14:59:16.719645 1045900 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 14:59:16.719706 1045900 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 14:59:16.719780 1045900 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 14:59:16.719922 1045900 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-342058] and IPs [192.168.39.180 127.0.0.1 ::1]
	I0729 14:59:16.719990 1045900 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 14:59:16.720090 1045900 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-342058] and IPs [192.168.39.180 127.0.0.1 ::1]
	I0729 14:59:16.720143 1045900 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 14:59:16.720194 1045900 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 14:59:16.720232 1045900 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 14:59:16.720278 1045900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:59:16.720320 1045900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:59:16.720367 1045900 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:59:16.720428 1045900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:59:16.720524 1045900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:59:16.720577 1045900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:59:16.720675 1045900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:59:16.720752 1045900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:59:16.722050 1045900 out.go:204]   - Booting up control plane ...
	I0729 14:59:16.722165 1045900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:59:16.722279 1045900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:59:16.722369 1045900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:59:16.722514 1045900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:59:16.722644 1045900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:59:16.722691 1045900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:59:16.722814 1045900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:59:16.722885 1045900 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:59:16.722946 1045900 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.736947ms
	I0729 14:59:16.723005 1045900 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:59:16.723098 1045900 kubeadm.go:310] [api-check] The API server is healthy after 5.501387124s
	I0729 14:59:16.723259 1045900 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:59:16.723390 1045900 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:59:16.723457 1045900 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:59:16.723643 1045900 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-342058 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:59:16.723691 1045900 kubeadm.go:310] [bootstrap-token] Using token: p9ppuq.bkjfhyee818evv69
	I0729 14:59:16.724925 1045900 out.go:204]   - Configuring RBAC rules ...
	I0729 14:59:16.725026 1045900 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:59:16.725096 1045900 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:59:16.725221 1045900 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:59:16.725327 1045900 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:59:16.725436 1045900 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:59:16.725526 1045900 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:59:16.725625 1045900 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:59:16.725662 1045900 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:59:16.725704 1045900 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:59:16.725710 1045900 kubeadm.go:310] 
	I0729 14:59:16.725777 1045900 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:59:16.725795 1045900 kubeadm.go:310] 
	I0729 14:59:16.725885 1045900 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:59:16.725893 1045900 kubeadm.go:310] 
	I0729 14:59:16.725913 1045900 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:59:16.725977 1045900 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:59:16.726040 1045900 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:59:16.726049 1045900 kubeadm.go:310] 
	I0729 14:59:16.726113 1045900 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:59:16.726123 1045900 kubeadm.go:310] 
	I0729 14:59:16.726184 1045900 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:59:16.726194 1045900 kubeadm.go:310] 
	I0729 14:59:16.726253 1045900 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:59:16.726321 1045900 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:59:16.726387 1045900 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:59:16.726397 1045900 kubeadm.go:310] 
	I0729 14:59:16.726500 1045900 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:59:16.726589 1045900 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:59:16.726596 1045900 kubeadm.go:310] 
	I0729 14:59:16.726667 1045900 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p9ppuq.bkjfhyee818evv69 \
	I0729 14:59:16.726757 1045900 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:59:16.726776 1045900 kubeadm.go:310] 	--control-plane 
	I0729 14:59:16.726782 1045900 kubeadm.go:310] 
	I0729 14:59:16.726849 1045900 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:59:16.726855 1045900 kubeadm.go:310] 
	I0729 14:59:16.726931 1045900 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p9ppuq.bkjfhyee818evv69 \
	I0729 14:59:16.727075 1045900 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:59:16.727090 1045900 cni.go:84] Creating CNI manager for ""
	I0729 14:59:16.727097 1045900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:59:16.728453 1045900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:59:16.729621 1045900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:59:16.741101 1045900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
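(Editor's note: the 496-byte conflist written above is not reproduced in the log; the following is only a representative bridge configuration of the kind minikube installs for this step. Every field value is an assumption, not the file's actual contents.)

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF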
	I0729 14:59:16.762892 1045900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:59:16.763032 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-342058 minikube.k8s.io/updated_at=2024_07_29T14_59_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=newest-cni-342058 minikube.k8s.io/primary=true
	I0729 14:59:16.763055 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:16.991122 1045900 ops.go:34] apiserver oom_adj: -16
	I0729 14:59:16.991130 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:17.491569 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:17.991390 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:18.492162 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:18.991170 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:19.492052 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:19.991466 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:20.491298 1045900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:59:20.609045 1045900 kubeadm.go:1113] duration metric: took 3.846071014s to wait for elevateKubeSystemPrivileges
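(Editor's note: the repeated "kubectl get sa default" calls above are minikube polling for the default ServiceAccount after creating the minikube-rbac cluster-admin binding; an equivalent hand-written wait loop, using the same binary and kubeconfig the log uses, is just a sketch of the same idea.)

    until sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly 500ms between attempts
    done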
	I0729 14:59:20.610262 1045900 kubeadm.go:394] duration metric: took 14.915230113s to StartCluster
	I0729 14:59:20.610301 1045900 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:20.610398 1045900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:59:20.613427 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:59:20.613745 1045900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 14:59:20.613770 1045900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:59:20.613868 1045900 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:59:20.613954 1045900 config.go:182] Loaded profile config "newest-cni-342058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:59:20.613962 1045900 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-342058"
	I0729 14:59:20.614010 1045900 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-342058"
	I0729 14:59:20.614015 1045900 addons.go:69] Setting default-storageclass=true in profile "newest-cni-342058"
	I0729 14:59:20.614063 1045900 host.go:66] Checking if "newest-cni-342058" exists ...
	I0729 14:59:20.614075 1045900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-342058"
	I0729 14:59:20.614428 1045900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:59:20.614460 1045900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:59:20.614594 1045900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:59:20.614631 1045900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:59:20.615514 1045900 out.go:177] * Verifying Kubernetes components...
	I0729 14:59:20.617323 1045900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:59:20.631944 1045900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0729 14:59:20.631956 1045900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0729 14:59:20.632448 1045900 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:59:20.632571 1045900 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:59:20.633086 1045900 main.go:141] libmachine: Using API Version  1
	I0729 14:59:20.633107 1045900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:59:20.633249 1045900 main.go:141] libmachine: Using API Version  1
	I0729 14:59:20.633276 1045900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:59:20.633475 1045900 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:59:20.633605 1045900 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:59:20.633663 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetState
	I0729 14:59:20.634163 1045900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:59:20.634203 1045900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:59:20.637378 1045900 addons.go:234] Setting addon default-storageclass=true in "newest-cni-342058"
	I0729 14:59:20.637413 1045900 host.go:66] Checking if "newest-cni-342058" exists ...
	I0729 14:59:20.637719 1045900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:59:20.637761 1045900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:59:20.650992 1045900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0729 14:59:20.651421 1045900 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:59:20.651990 1045900 main.go:141] libmachine: Using API Version  1
	I0729 14:59:20.652020 1045900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:59:20.652428 1045900 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:59:20.652680 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetState
	I0729 14:59:20.654053 1045900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0729 14:59:20.654510 1045900 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:59:20.654555 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:59:20.654981 1045900 main.go:141] libmachine: Using API Version  1
	I0729 14:59:20.655005 1045900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:59:20.655343 1045900 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:59:20.655889 1045900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:59:20.655922 1045900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:59:20.656386 1045900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:59:20.657759 1045900 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:59:20.657779 1045900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:59:20.657798 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:59:20.660704 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:59:20.661202 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:59:20.661251 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:59:20.661520 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:59:20.661691 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:59:20.661865 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:59:20.662003 1045900 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa Username:docker}
	I0729 14:59:20.672001 1045900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I0729 14:59:20.672399 1045900 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:59:20.672876 1045900 main.go:141] libmachine: Using API Version  1
	I0729 14:59:20.672897 1045900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:59:20.673188 1045900 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:59:20.673362 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetState
	I0729 14:59:20.674882 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:59:20.675065 1045900 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:59:20.675084 1045900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:59:20.675098 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHHostname
	I0729 14:59:20.677631 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:59:20.678074 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3f:6b", ip: ""} in network mk-newest-cni-342058: {Iface:virbr3 ExpiryTime:2024-07-29 15:58:49 +0000 UTC Type:0 Mac:52:54:00:1c:3f:6b Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:newest-cni-342058 Clientid:01:52:54:00:1c:3f:6b}
	I0729 14:59:20.678134 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | domain newest-cni-342058 has defined IP address 192.168.39.180 and MAC address 52:54:00:1c:3f:6b in network mk-newest-cni-342058
	I0729 14:59:20.678251 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHPort
	I0729 14:59:20.678429 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHKeyPath
	I0729 14:59:20.678540 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .GetSSHUsername
	I0729 14:59:20.678657 1045900 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/newest-cni-342058/id_rsa Username:docker}
	I0729 14:59:20.865703 1045900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 14:59:20.865706 1045900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:59:21.043310 1045900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:59:21.072693 1045900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:59:21.499247 1045900 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
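(Editor's note: the sed pipeline above splices a hosts block, plus a log directive, into the CoreDNS Corefile ahead of the forward plugin. One way to confirm the injection by hand, a hedged check that is not part of the test itself:)

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected fragment after the edit:
    #        hosts {
    #           192.168.39.1 host.minikube.internal
    #           fallthrough
    #        }
    #        forward . /etc/resolv.conf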
	I0729 14:59:21.499374 1045900 main.go:141] libmachine: Making call to close driver server
	I0729 14:59:21.499410 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .Close
	I0729 14:59:21.499738 1045900 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:59:21.499760 1045900 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:59:21.499770 1045900 main.go:141] libmachine: Making call to close driver server
	I0729 14:59:21.499779 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .Close
	I0729 14:59:21.499790 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Closing plugin on server side
	I0729 14:59:21.501043 1045900 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:59:21.501096 1045900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:59:21.501380 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Closing plugin on server side
	I0729 14:59:21.501424 1045900 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:59:21.501445 1045900 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:59:21.519378 1045900 main.go:141] libmachine: Making call to close driver server
	I0729 14:59:21.519407 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .Close
	I0729 14:59:21.519683 1045900 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:59:21.519707 1045900 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:59:21.965020 1045900 main.go:141] libmachine: Making call to close driver server
	I0729 14:59:21.965053 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .Close
	I0729 14:59:21.965105 1045900 api_server.go:72] duration metric: took 1.351287562s to wait for apiserver process to appear ...
	I0729 14:59:21.965393 1045900 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:59:21.965423 1045900 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0729 14:59:21.965472 1045900 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:59:21.965491 1045900 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:59:21.965500 1045900 main.go:141] libmachine: Making call to close driver server
	I0729 14:59:21.965508 1045900 main.go:141] libmachine: (newest-cni-342058) Calling .Close
	I0729 14:59:21.965422 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Closing plugin on server side
	I0729 14:59:21.965979 1045900 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:59:21.965997 1045900 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:59:21.965982 1045900 main.go:141] libmachine: (newest-cni-342058) DBG | Closing plugin on server side
	I0729 14:59:21.967850 1045900 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 14:59:21.969039 1045900 addons.go:510] duration metric: took 1.355171483s for enable addons: enabled=[default-storageclass storage-provisioner]
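(Editor's note: a quick manual follow-up to the addon enablement above; a sketch only, using the same manifests applied earlier, with the pod name "storage-provisioner" taken from the applied YAML and no assumption about the StorageClass name.)

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl \
        -n kube-system get pod storage-provisioner
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl \
        get storageclass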
	I0729 14:59:21.975627 1045900 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I0729 14:59:21.976551 1045900 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:59:21.976591 1045900 api_server.go:131] duration metric: took 11.187515ms to wait for apiserver health ...
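(Editor's note: the healthz poll above can be reproduced against the same endpoint; a hedged sketch that goes through the node's admin kubeconfig rather than raw TLS flags.)

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get --raw='/healthz?verbose'
    # an unauthenticated probe such as `curl -ks https://192.168.39.180:8443/healthz`
    # typically answers "ok" as well, since /healthz is readable by unauthenticated
    # clients under the default RBAC bindings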
	I0729 14:59:21.976624 1045900 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:59:21.993894 1045900 system_pods.go:59] 7 kube-system pods found
	I0729 14:59:21.993935 1045900 system_pods.go:61] "coredns-5cfdc65f69-5swmx" [84802dfb-0602-4e89-9118-295ad83e0484] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:59:21.993946 1045900 system_pods.go:61] "etcd-newest-cni-342058" [b46a5b73-19c5-4037-a208-138a086b1525] Running
	I0729 14:59:21.993957 1045900 system_pods.go:61] "kube-apiserver-newest-cni-342058" [966a6782-a02c-4b41-8bb9-ff20557fe660] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:59:21.993970 1045900 system_pods.go:61] "kube-controller-manager-newest-cni-342058" [085972a1-3f12-4ff1-8998-56fb55cc304c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:59:21.993981 1045900 system_pods.go:61] "kube-proxy-x4xvj" [d0d88b46-a4f3-4d41-b7c6-300fadbc6652] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:59:21.993991 1045900 system_pods.go:61] "kube-scheduler-newest-cni-342058" [31d85010-477f-44a2-b68f-6ae9a66fa4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:59:21.993997 1045900 system_pods.go:61] "storage-provisioner" [1761f076-9c83-4398-9c3a-ccba31290b91] Pending
	I0729 14:59:21.994003 1045900 system_pods.go:74] duration metric: took 17.368882ms to wait for pod list to return data ...
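(Editor's note: the kube-system pod survey above can be reproduced from the node; a sketch with the binary and kubeconfig the log already uses.)

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get pods -o wide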
	I0729 14:59:21.994013 1045900 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:59:22.019267 1045900 default_sa.go:45] found service account: "default"
	I0729 14:59:22.019295 1045900 default_sa.go:55] duration metric: took 25.272411ms for default service account to be created ...
	I0729 14:59:22.019306 1045900 kubeadm.go:582] duration metric: took 1.405494897s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 14:59:22.019323 1045900 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:59:22.020565 1045900 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-342058" context rescaled to 1 replicas
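(Editor's note: the rescale noted above keeps a single CoreDNS replica for a single-node profile; done by hand it would look roughly like the following sketch, which is not the code path minikube actually takes.)

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system scale deployment coredns --replicas=1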
	I0729 14:59:22.037459 1045900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:59:22.037523 1045900 node_conditions.go:123] node cpu capacity is 2
	I0729 14:59:22.037538 1045900 node_conditions.go:105] duration metric: took 18.210215ms to run NodePressure ...
	I0729 14:59:22.037553 1045900 start.go:241] waiting for startup goroutines ...
	I0729 14:59:22.037563 1045900 start.go:246] waiting for cluster config update ...
	I0729 14:59:22.037577 1045900 start.go:255] writing updated cluster config ...
	I0729 14:59:22.037900 1045900 ssh_runner.go:195] Run: rm -f paused
	I0729 14:59:22.104665 1045900 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 14:59:22.106210 1045900 out.go:177] * Done! kubectl is now configured to use "newest-cni-342058" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.722712437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265171722689038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ceb55ae-23bd-42d4-a129-7a80ecb49686 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.723242078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1c640c9-6c8b-42d7-a462-e4b82e6ff4c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.723298342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1c640c9-6c8b-42d7-a462-e4b82e6ff4c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.723857647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263917439411605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9730fdd74b65ac1bbf5e6b9ae441a2bbb0987220523c3c0196be708c4c0c3da9,PodSandboxId:d1bc6f5643615099eccc7360dfe694a10993162317d7374663b36b773c470a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722263895469474230,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edbc7100-5ac6-4390-98cf-b25430811079,},Annotations:map[string]string{io.kubernetes.container.hash: 33062a48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d,PodSandboxId:48c4f3ee73cb82b7b98baafb48f72768686b27f2ca95c11caadbce4fc9168003,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263894397531319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6dhzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c680e565-fe93-4072-8fe8-6fd440ae5675,},Annotations:map[string]string{io.kubernetes.container.hash: cf53aa0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b,PodSandboxId:fd6724a9aa4c34cf84b6252c053b4d91e97d7f6252a5c136f5353c6e4a84a751,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263886696069519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v79q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43e850d-b94e-467c-b
f0f-0eac3828f54f,},Annotations:map[string]string{io.kubernetes.container.hash: 6a843a38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263886661941181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1
e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1,PodSandboxId:313ef259dc43d4445991f332076e128a7f0f959c520f250c1ffae2e6d8ebef3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263882191517261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f505464a2b90053edb5a2e8c39af5afc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2ba1fd2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40,PodSandboxId:25492576dabe31fcb69f3da27658eb441756406366bbe433d4cd5f58dae3e1cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263882204157437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9adec07bd4fd19fe84094f223023c77,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8,PodSandboxId:93123498f71110244da3766451ff829fd57f09c4065fa24297b86581ab282783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263882187083721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae9160831d31b493d52099610c42660,},Annotations:map[string]string{io.kubernetes.container.hash:
49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322,PodSandboxId:fc46c20762ab46eaade5b501f697d4354d898f4c2a70d5720f8581c13496c0a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263882181524173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832cf326908a8d72f3f9a2e90540c6ae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1c640c9-6c8b-42d7-a462-e4b82e6ff4c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.760072060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6576c54-7ac4-46c1-835d-4fdba36ccd06 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.760143067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6576c54-7ac4-46c1-835d-4fdba36ccd06 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.761141811Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7806f45d-b5c8-4f40-9dfa-4a6c809d40f8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.761501262Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265171761483402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7806f45d-b5c8-4f40-9dfa-4a6c809d40f8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.762029557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8942dd4b-bf24-4253-8294-19e7707fe941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.762080450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8942dd4b-bf24-4253-8294-19e7707fe941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.762259799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263917439411605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9730fdd74b65ac1bbf5e6b9ae441a2bbb0987220523c3c0196be708c4c0c3da9,PodSandboxId:d1bc6f5643615099eccc7360dfe694a10993162317d7374663b36b773c470a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722263895469474230,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edbc7100-5ac6-4390-98cf-b25430811079,},Annotations:map[string]string{io.kubernetes.container.hash: 33062a48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d,PodSandboxId:48c4f3ee73cb82b7b98baafb48f72768686b27f2ca95c11caadbce4fc9168003,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263894397531319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6dhzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c680e565-fe93-4072-8fe8-6fd440ae5675,},Annotations:map[string]string{io.kubernetes.container.hash: cf53aa0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b,PodSandboxId:fd6724a9aa4c34cf84b6252c053b4d91e97d7f6252a5c136f5353c6e4a84a751,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263886696069519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v79q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43e850d-b94e-467c-b
f0f-0eac3828f54f,},Annotations:map[string]string{io.kubernetes.container.hash: 6a843a38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263886661941181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1
e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1,PodSandboxId:313ef259dc43d4445991f332076e128a7f0f959c520f250c1ffae2e6d8ebef3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263882191517261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f505464a2b90053edb5a2e8c39af5afc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2ba1fd2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40,PodSandboxId:25492576dabe31fcb69f3da27658eb441756406366bbe433d4cd5f58dae3e1cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263882204157437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9adec07bd4fd19fe84094f223023c77,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8,PodSandboxId:93123498f71110244da3766451ff829fd57f09c4065fa24297b86581ab282783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263882187083721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae9160831d31b493d52099610c42660,},Annotations:map[string]string{io.kubernetes.container.hash:
49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322,PodSandboxId:fc46c20762ab46eaade5b501f697d4354d898f4c2a70d5720f8581c13496c0a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263882181524173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832cf326908a8d72f3f9a2e90540c6ae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8942dd4b-bf24-4253-8294-19e7707fe941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.797832610Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c16ea7f-fc5a-4477-8fda-b7d1578f4f99 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.797901642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c16ea7f-fc5a-4477-8fda-b7d1578f4f99 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.799087222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e5e0a11-e5fd-4bdc-a70e-3004a8810d83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.800140593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265171800113726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e5e0a11-e5fd-4bdc-a70e-3004a8810d83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.800673288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=968d157c-5ae2-4034-8c93-968099959876 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.800791571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=968d157c-5ae2-4034-8c93-968099959876 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.800986192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263917439411605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9730fdd74b65ac1bbf5e6b9ae441a2bbb0987220523c3c0196be708c4c0c3da9,PodSandboxId:d1bc6f5643615099eccc7360dfe694a10993162317d7374663b36b773c470a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722263895469474230,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edbc7100-5ac6-4390-98cf-b25430811079,},Annotations:map[string]string{io.kubernetes.container.hash: 33062a48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d,PodSandboxId:48c4f3ee73cb82b7b98baafb48f72768686b27f2ca95c11caadbce4fc9168003,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263894397531319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6dhzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c680e565-fe93-4072-8fe8-6fd440ae5675,},Annotations:map[string]string{io.kubernetes.container.hash: cf53aa0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b,PodSandboxId:fd6724a9aa4c34cf84b6252c053b4d91e97d7f6252a5c136f5353c6e4a84a751,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263886696069519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v79q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43e850d-b94e-467c-b
f0f-0eac3828f54f,},Annotations:map[string]string{io.kubernetes.container.hash: 6a843a38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263886661941181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1
e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1,PodSandboxId:313ef259dc43d4445991f332076e128a7f0f959c520f250c1ffae2e6d8ebef3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263882191517261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f505464a2b90053edb5a2e8c39af5afc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2ba1fd2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40,PodSandboxId:25492576dabe31fcb69f3da27658eb441756406366bbe433d4cd5f58dae3e1cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263882204157437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9adec07bd4fd19fe84094f223023c77,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8,PodSandboxId:93123498f71110244da3766451ff829fd57f09c4065fa24297b86581ab282783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263882187083721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae9160831d31b493d52099610c42660,},Annotations:map[string]string{io.kubernetes.container.hash:
49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322,PodSandboxId:fc46c20762ab46eaade5b501f697d4354d898f4c2a70d5720f8581c13496c0a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263882181524173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832cf326908a8d72f3f9a2e90540c6ae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=968d157c-5ae2-4034-8c93-968099959876 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.838865930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0eefb144-760e-4f1f-ba6d-b28fc82a01a7 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.838936148Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0eefb144-760e-4f1f-ba6d-b28fc82a01a7 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.840307403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a86b5be-1e80-4e04-952b-b0fe6a44dfc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.840688100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265171840666945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a86b5be-1e80-4e04-952b-b0fe6a44dfc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.841311262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7190707d-644a-45de-a1c8-85a5d70f72a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.841362971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7190707d-644a-45de-a1c8-85a5d70f72a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:31 embed-certs-668123 crio[724]: time="2024-07-29 14:59:31.841558813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722263917439411605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9730fdd74b65ac1bbf5e6b9ae441a2bbb0987220523c3c0196be708c4c0c3da9,PodSandboxId:d1bc6f5643615099eccc7360dfe694a10993162317d7374663b36b773c470a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722263895469474230,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edbc7100-5ac6-4390-98cf-b25430811079,},Annotations:map[string]string{io.kubernetes.container.hash: 33062a48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d,PodSandboxId:48c4f3ee73cb82b7b98baafb48f72768686b27f2ca95c11caadbce4fc9168003,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722263894397531319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6dhzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c680e565-fe93-4072-8fe8-6fd440ae5675,},Annotations:map[string]string{io.kubernetes.container.hash: cf53aa0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b,PodSandboxId:fd6724a9aa4c34cf84b6252c053b4d91e97d7f6252a5c136f5353c6e4a84a751,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722263886696069519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v79q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43e850d-b94e-467c-b
f0f-0eac3828f54f,},Annotations:map[string]string{io.kubernetes.container.hash: 6a843a38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4,PodSandboxId:ad2e8069a67618ba222da7b67134a2bdeed1234728f7ebf4c667e210942c1051,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722263886661941181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecdab0df-406c-4f3c-b8fe-34a48b7f1
e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd23d1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1,PodSandboxId:313ef259dc43d4445991f332076e128a7f0f959c520f250c1ffae2e6d8ebef3c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722263882191517261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f505464a2b90053edb5a2e8c39af5afc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2ba1fd2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40,PodSandboxId:25492576dabe31fcb69f3da27658eb441756406366bbe433d4cd5f58dae3e1cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722263882204157437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9adec07bd4fd19fe84094f223023c77,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8,PodSandboxId:93123498f71110244da3766451ff829fd57f09c4065fa24297b86581ab282783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722263882187083721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae9160831d31b493d52099610c42660,},Annotations:map[string]string{io.kubernetes.container.hash:
49aa513b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322,PodSandboxId:fc46c20762ab46eaade5b501f697d4354d898f4c2a70d5720f8581c13496c0a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722263882181524173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-668123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832cf326908a8d72f3f9a2e90540c6ae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7190707d-644a-45de-a1c8-85a5d70f72a9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb9e633119b91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   ad2e8069a6761       storage-provisioner
	9730fdd74b65a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   d1bc6f5643615       busybox
	cce96789d197c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   48c4f3ee73cb8       coredns-7db6d8ff4d-6dhzz
	1a12022d9b8d8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      21 minutes ago      Running             kube-proxy                1                   fd6724a9aa4c3       kube-proxy-2v79q
	40292615dffc7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   ad2e8069a6761       storage-provisioner
	ed34fb84b9098       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      21 minutes ago      Running             kube-scheduler            1                   25492576dabe3       kube-scheduler-embed-certs-668123
	759428588e36e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   313ef259dc43d       etcd-embed-certs-668123
	0e342f5e4bb06       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      21 minutes ago      Running             kube-apiserver            1                   93123498f7111       kube-apiserver-embed-certs-668123
	d2573d61839fb       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      21 minutes ago      Running             kube-controller-manager   1                   fc46c20762ab4       kube-controller-manager-embed-certs-668123
	
	
	==> coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40987 - 58230 "HINFO IN 8956371880180969171.6547811078675431536. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016375587s
	
	
	==> describe nodes <==
	Name:               embed-certs-668123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-668123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=embed-certs-668123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T14_30_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:30:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-668123
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:59:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:59:00 +0000   Mon, 29 Jul 2024 14:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:59:00 +0000   Mon, 29 Jul 2024 14:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:59:00 +0000   Mon, 29 Jul 2024 14:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:59:00 +0000   Mon, 29 Jul 2024 14:38:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.53
	  Hostname:    embed-certs-668123
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 624dda01c3c740c99fa2a7a21b4ad9e8
	  System UUID:                624dda01-c3c7-40c9-9fa2-a7a21b4ad9e8
	  Boot ID:                    9a80de19-1697-4aaa-b7b0-a87331c1439a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-6dhzz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-668123                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-668123             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-668123    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-2v79q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-668123             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-5msnp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-668123 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-668123 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-668123 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-668123 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-668123 event: Registered Node embed-certs-668123 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-668123 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-668123 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-668123 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-668123 event: Registered Node embed-certs-668123 in Controller
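
The request percentages in the Allocated resources table above are simply the summed pod requests divided by the node's allocatable capacity (2 CPUs, 2164184Ki memory). A minimal Go sketch of that arithmetic, with the figures from this node hardcoded for illustration:

package main

import "fmt"

func main() {
	// Allocatable capacity reported for embed-certs-668123.
	allocCPUMilli := int64(2000) // 2 CPUs = 2000m
	allocMemKi := int64(2164184) // allocatable memory in Ki

	// Summed requests from the Allocated resources table.
	reqCPUMilli := int64(850)
	reqMemKi := int64(370 * 1024) // 370Mi expressed in Ki

	fmt.Printf("cpu:    %d%%\n", reqCPUMilli*100/allocCPUMilli) // prints 42
	fmt.Printf("memory: %d%%\n", reqMemKi*100/allocMemKi)       // prints 17
}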
	
	
	==> dmesg <==
	[Jul29 14:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050125] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039160] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.762991] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.506538] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.544117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.778255] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.057161] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055886] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.221635] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.134279] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.313673] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.284325] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.069578] kauditd_printk_skb: 130 callbacks suppressed
	[Jul29 14:38] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +5.660972] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.323557] systemd-fstab-generator[1530]: Ignoring "noauto" option for root device
	[  +3.310534] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.155515] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] <==
	{"level":"warn","ts":"2024-07-29T14:38:57.126501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.283633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-07-29T14:38:57.126539Z","caller":"traceutil/trace.go:171","msg":"trace[43429701] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:628; }","duration":"127.348743ms","start":"2024-07-29T14:38:56.999183Z","end":"2024-07-29T14:38:57.126532Z","steps":["trace[43429701] 'agreement among raft nodes before linearized reading'  (duration: 127.206099ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T14:38:57.126893Z","caller":"traceutil/trace.go:171","msg":"trace[1282939821] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"320.758999ms","start":"2024-07-29T14:38:56.806119Z","end":"2024-07-29T14:38:57.126878Z","steps":["trace[1282939821] 'process raft request'  (duration: 191.696128ms)","trace[1282939821] 'compare'  (duration: 128.182348ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T14:38:57.126997Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:38:56.806103Z","time spent":"320.851063ms","remote":"127.0.0.1:59272","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-668123\" mod_revision:618 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-668123\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-668123\" > >"}
	{"level":"info","ts":"2024-07-29T14:38:57.127156Z","caller":"traceutil/trace.go:171","msg":"trace[536823810] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"254.332486ms","start":"2024-07-29T14:38:56.872817Z","end":"2024-07-29T14:38:57.12715Z","steps":["trace[536823810] 'process raft request'  (duration: 253.489233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T14:39:03.47216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.663068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5msnp\" ","response":"range_response_count:1 size:4281"}
	{"level":"info","ts":"2024-07-29T14:39:03.472222Z","caller":"traceutil/trace.go:171","msg":"trace[1321925536] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5msnp; range_end:; response_count:1; response_revision:635; }","duration":"190.760888ms","start":"2024-07-29T14:39:03.281446Z","end":"2024-07-29T14:39:03.472207Z","steps":["trace[1321925536] 'range keys from in-memory index tree'  (duration: 190.558822ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T14:48:04.333937Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":840}
	{"level":"info","ts":"2024-07-29T14:48:04.343874Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":840,"took":"9.561259ms","hash":671835366,"current-db-size-bytes":2613248,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2613248,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-29T14:48:04.343955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":671835366,"revision":840,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T14:53:04.341499Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1083}
	{"level":"info","ts":"2024-07-29T14:53:04.345396Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1083,"took":"3.340009ms","hash":280216706,"current-db-size-bytes":2613248,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1671168,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-29T14:53:04.345477Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":280216706,"revision":1083,"compact-revision":840}
	{"level":"info","ts":"2024-07-29T14:58:04.349815Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1326}
	{"level":"info","ts":"2024-07-29T14:58:04.353717Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1326,"took":"3.588564ms","hash":1488748856,"current-db-size-bytes":2613248,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1642496,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T14:58:04.353812Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1488748856,"revision":1326,"compact-revision":1083}
	{"level":"warn","ts":"2024-07-29T14:59:07.168949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.436638ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277398289115208 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.53\" mod_revision:1612 > success:<request_put:<key:\"/registry/masterleases/192.168.50.53\" value_size:66 lease:5152277398289115206 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.53\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T14:59:07.169419Z","caller":"traceutil/trace.go:171","msg":"trace[1030985046] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"251.129812ms","start":"2024-07-29T14:59:06.918247Z","end":"2024-07-29T14:59:07.169377Z","steps":["trace[1030985046] 'process raft request'  (duration: 122.101702ms)","trace[1030985046] 'compare'  (duration: 128.131892ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T14:59:07.627363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.784385ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277398289115213 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1620 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T14:59:07.627527Z","caller":"traceutil/trace.go:171","msg":"trace[495916986] linearizableReadLoop","detail":"{readStateIndex:1925; appliedIndex:1924; }","duration":"452.089339ms","start":"2024-07-29T14:59:07.175423Z","end":"2024-07-29T14:59:07.627513Z","steps":["trace[495916986] 'read index received'  (duration: 332.033692ms)","trace[495916986] 'applied index is now lower than readState.Index'  (duration: 120.054121ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T14:59:07.627623Z","caller":"traceutil/trace.go:171","msg":"trace[1183419493] transaction","detail":"{read_only:false; response_revision:1622; number_of_response:1; }","duration":"453.658849ms","start":"2024-07-29T14:59:07.173948Z","end":"2024-07-29T14:59:07.627607Z","steps":["trace[1183419493] 'process raft request'  (duration: 333.561519ms)","trace[1183419493] 'compare'  (duration: 119.637774ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T14:59:07.62784Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:59:07.173937Z","time spent":"453.741878ms","remote":"127.0.0.1:59202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1620 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-29T14:59:07.628016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"452.579616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:480"}
	{"level":"info","ts":"2024-07-29T14:59:07.628082Z","caller":"traceutil/trace.go:171","msg":"trace[1545377892] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1622; }","duration":"452.664324ms","start":"2024-07-29T14:59:07.175406Z","end":"2024-07-29T14:59:07.628071Z","steps":["trace[1545377892] 'agreement among raft nodes before linearized reading'  (duration: 452.213526ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T14:59:07.628173Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T14:59:07.175399Z","time spent":"452.763812ms","remote":"127.0.0.1:59288","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":502,"request content":"key:\"/registry/endpointslices/default/kubernetes\" "}
	
	
	==> kernel <==
	 14:59:32 up 21 min,  0 users,  load average: 0.76, 0.34, 0.18
	Linux embed-certs-668123 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] <==
	I0729 14:54:06.645314       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:56:06.643382       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:56:06.643688       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:56:06.643718       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:56:06.646106       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:56:06.646135       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:56:06.646144       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:58:05.647949       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:58:05.648301       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 14:58:06.648476       1 handler_proxy.go:93] no RequestInfo found in the context
	W0729 14:58:06.648486       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:58:06.648652       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:58:06.648661       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0729 14:58:06.648690       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:58:06.650800       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:59:06.649701       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:59:06.649840       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:59:06.649849       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:59:06.652003       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:59:06.652137       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:59:06.652166       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
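
The repeated 503s above mean the aggregation layer cannot reach a healthy backend for v1beta1.metrics.k8s.io, which is consistent with the metrics-server pod never starting (see the kubelet log below). A rough client-go sketch for reading that APIService's conditions through the dynamic client; it assumes a kubeconfig at the default path and a reachable API server:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// APIService objects are cluster-scoped and served by apiregistration.k8s.io/v1.
	gvr := schema.GroupVersionResource{Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices"}
	obj, err := dc.Resource(gvr).Get(context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
	for _, c := range conds {
		m := c.(map[string]interface{})
		fmt.Printf("%v=%v reason=%v\n", m["type"], m["status"], m["reason"])
	}
}

While metrics-server is down, the Available condition would be expected to report False rather than True.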
	
	
	==> kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] <==
	I0729 14:54:10.201491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="448.215µs"
	E0729 14:54:18.697717       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:54:19.231819       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:54:21.201548       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="76.381µs"
	E0729 14:54:48.702526       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:54:49.239926       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:55:18.707583       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:55:19.247841       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:55:48.711812       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:55:49.256033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:56:18.716804       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:56:19.264421       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:56:48.721410       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:56:49.271676       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:57:18.726500       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:57:19.279214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:57:48.731400       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:57:49.286221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:58:18.736484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:58:19.293573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:58:48.742592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:58:49.301068       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:59:18.747843       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:59:19.308416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:59:24.198394       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="284.192µs"
	
	
	==> kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] <==
	I0729 14:38:06.910479       1 server_linux.go:69] "Using iptables proxy"
	I0729 14:38:06.920558       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.53"]
	I0729 14:38:06.957526       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 14:38:06.957609       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:38:06.957638       1 server_linux.go:165] "Using iptables Proxier"
	I0729 14:38:06.960330       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 14:38:06.960565       1 server.go:872] "Version info" version="v1.30.3"
	I0729 14:38:06.960609       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:38:06.962144       1 config.go:192] "Starting service config controller"
	I0729 14:38:06.962189       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:38:06.962228       1 config.go:101] "Starting endpoint slice config controller"
	I0729 14:38:06.962244       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:38:06.962572       1 config.go:319] "Starting node config controller"
	I0729 14:38:06.962608       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:38:07.063273       1 shared_informer.go:320] Caches are synced for node config
	I0729 14:38:07.063374       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:38:07.063391       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] <==
	I0729 14:38:03.303024       1 serving.go:380] Generated self-signed cert in-memory
	W0729 14:38:05.590466       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 14:38:05.590573       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:38:05.590600       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 14:38:05.590607       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 14:38:05.646707       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 14:38:05.651257       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:38:05.653101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 14:38:05.653238       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 14:38:05.653266       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 14:38:05.653280       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 14:38:05.753479       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 14:57:01 embed-certs-668123 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:57:12 embed-certs-668123 kubelet[937]: E0729 14:57:12.184201     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:57:26 embed-certs-668123 kubelet[937]: E0729 14:57:26.183869     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:57:37 embed-certs-668123 kubelet[937]: E0729 14:57:37.184090     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:57:48 embed-certs-668123 kubelet[937]: E0729 14:57:48.183901     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:58:00 embed-certs-668123 kubelet[937]: E0729 14:58:00.184027     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:58:01 embed-certs-668123 kubelet[937]: E0729 14:58:01.217178     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:58:01 embed-certs-668123 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:58:01 embed-certs-668123 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:58:01 embed-certs-668123 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:58:01 embed-certs-668123 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:58:15 embed-certs-668123 kubelet[937]: E0729 14:58:15.183119     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:58:29 embed-certs-668123 kubelet[937]: E0729 14:58:29.184451     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:58:44 embed-certs-668123 kubelet[937]: E0729 14:58:44.183624     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:58:59 embed-certs-668123 kubelet[937]: E0729 14:58:59.184283     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:59:01 embed-certs-668123 kubelet[937]: E0729 14:59:01.219393     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:59:01 embed-certs-668123 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:59:01 embed-certs-668123 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:59:01 embed-certs-668123 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:59:01 embed-certs-668123 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:59:11 embed-certs-668123 kubelet[937]: E0729 14:59:11.283470     937 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 14:59:11 embed-certs-668123 kubelet[937]: E0729 14:59:11.283566     937 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 14:59:11 embed-certs-668123 kubelet[937]: E0729 14:59:11.283870     937 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kx9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-5msnp_kube-system(eb9cd6f7-caf5-4b18-b0d6-0f01add839ce): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 14:59:11 embed-certs-668123 kubelet[937]: E0729 14:59:11.283918     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	Jul 29 14:59:24 embed-certs-668123 kubelet[937]: E0729 14:59:24.183194     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5msnp" podUID="eb9cd6f7-caf5-4b18-b0d6-0f01add839ce"
	
	
	==> storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] <==
	I0729 14:38:06.840937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 14:38:36.845691       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] <==
	I0729 14:38:37.575618       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 14:38:37.585149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 14:38:37.585239       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 14:38:54.987972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 14:38:54.988350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-668123_2eb9da7e-9d3b-4756-9d53-8e848f523f15!
	I0729 14:38:54.989156       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d40ab2b-13cf-41cb-bc8c-a2b36c4772e4", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-668123_2eb9da7e-9d3b-4756-9d53-8e848f523f15 became leader
	I0729 14:38:55.089840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-668123_2eb9da7e-9d3b-4756-9d53-8e848f523f15!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-668123 -n embed-certs-668123
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-668123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5msnp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-668123 describe pod metrics-server-569cc877fc-5msnp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-668123 describe pod metrics-server-569cc877fc-5msnp: exit status 1 (63.936167ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5msnp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-668123 describe pod metrics-server-569cc877fc-5msnp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (474.79s)
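Note: the repeated metrics-server ErrImagePull / ImagePullBackOff entries in the kubelet log above are expected for this profile, since the addon was enabled against a deliberately unreachable registry (--registries=MetricsServer=fake.domain, see the Audit log below). The 'not found' result from the describe step is most likely only a namespace mismatch: the pod lives in kube-system, while the post-mortem describe runs without -n kube-system. A minimal manual check (a sketch, assuming the embed-certs-668123 profile were still running) might be:

	kubectl --context embed-certs-668123 -n kube-system get pod metrics-server-569cc877fc-5msnp \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'

which should print ErrImagePull or ImagePullBackOff, matching the kubelet entries at 14:59:11 and 14:59:24.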

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (413.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 14:59:35.0554804 +0000 UTC m=+6461.463204176
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-751306 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-751306 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.219µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-751306 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
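Note: the assertion above expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4 (the image substituted via --images=MetricsScraper=registry.k8s.io/echoserver:1.4, see the Audit log below), but the describe call inherited an already-expired context deadline, so no deployment info was captured. A hedged, manual version of the same image check might be:

	kubectl --context default-k8s-diff-port-751306 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'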
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-751306 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-751306 logs -n 25: (1.218657891s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:58 UTC | 29 Jul 24 14:58 UTC |
	| start   | -p newest-cni-342058 --memory=2200 --alsologtostderr   | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:58 UTC | 29 Jul 24 14:59 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:58 UTC | 29 Jul 24 14:58 UTC |
	| addons  | enable metrics-server -p newest-cni-342058             | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:59 UTC | 29 Jul 24 14:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-342058                                   | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:59 UTC | 29 Jul 24 14:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:59 UTC | 29 Jul 24 14:59 UTC |
	| addons  | enable dashboard -p newest-cni-342058                  | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:59 UTC | 29 Jul 24 14:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-342058 --memory=2200 --alsologtostderr   | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:59 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:59:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:59:34.936297 1046852 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:59:34.936405 1046852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:59:34.936431 1046852 out.go:304] Setting ErrFile to fd 2...
	I0729 14:59:34.936442 1046852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:59:34.936638 1046852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:59:34.937383 1046852 out.go:298] Setting JSON to false
	I0729 14:59:34.938683 1046852 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16927,"bootTime":1722248248,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:59:34.938743 1046852 start.go:139] virtualization: kvm guest
	I0729 14:59:34.940787 1046852 out.go:177] * [newest-cni-342058] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:59:34.942034 1046852 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:59:34.942082 1046852 notify.go:220] Checking for updates...
	I0729 14:59:34.944518 1046852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:59:34.945911 1046852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:59:34.947113 1046852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:59:34.948339 1046852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:59:34.949489 1046852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:59:34.951114 1046852 config.go:182] Loaded profile config "newest-cni-342058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:59:34.951727 1046852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:59:34.951806 1046852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:59:34.967107 1046852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0729 14:59:34.967502 1046852 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:59:34.968093 1046852 main.go:141] libmachine: Using API Version  1
	I0729 14:59:34.968128 1046852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:59:34.968493 1046852 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:59:34.968713 1046852 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:59:34.968993 1046852 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:59:34.969265 1046852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:59:34.969304 1046852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:59:34.984250 1046852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0729 14:59:34.984638 1046852 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:59:34.985114 1046852 main.go:141] libmachine: Using API Version  1
	I0729 14:59:34.985137 1046852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:59:34.985506 1046852 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:59:34.985719 1046852 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:59:35.021719 1046852 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:59:35.022878 1046852 start.go:297] selected driver: kvm2
	I0729 14:59:35.022891 1046852 start.go:901] validating driver "kvm2" against &{Name:newest-cni-342058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-342058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:59:35.023004 1046852 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:59:35.023650 1046852 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:59:35.023732 1046852 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:59:35.038900 1046852 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:59:35.039433 1046852 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 14:59:35.039475 1046852 cni.go:84] Creating CNI manager for ""
	I0729 14:59:35.039485 1046852 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:59:35.039545 1046852 start.go:340] cluster config:
	{Name:newest-cni-342058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-342058 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:59:35.039710 1046852 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:59:35.041381 1046852 out.go:177] * Starting "newest-cni-342058" primary control-plane node in "newest-cni-342058" cluster
	I0729 14:59:35.042640 1046852 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:59:35.042683 1046852 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:59:35.042728 1046852 cache.go:56] Caching tarball of preloaded images
	I0729 14:59:35.042823 1046852 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:59:35.042837 1046852 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 14:59:35.042941 1046852 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/config.json ...
	I0729 14:59:35.043119 1046852 start.go:360] acquireMachinesLock for newest-cni-342058: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:59:35.043171 1046852 start.go:364] duration metric: took 30.074µs to acquireMachinesLock for "newest-cni-342058"
	I0729 14:59:35.043190 1046852 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:59:35.043198 1046852 fix.go:54] fixHost starting: 
	I0729 14:59:35.043450 1046852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:59:35.043484 1046852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:59:35.060554 1046852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0729 14:59:35.060974 1046852 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:59:35.061430 1046852 main.go:141] libmachine: Using API Version  1
	I0729 14:59:35.061475 1046852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:59:35.061844 1046852 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:59:35.062055 1046852 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	I0729 14:59:35.062220 1046852 main.go:141] libmachine: (newest-cni-342058) Calling .GetState
	I0729 14:59:35.063883 1046852 fix.go:112] recreateIfNeeded on newest-cni-342058: state=Stopped err=<nil>
	I0729 14:59:35.063924 1046852 main.go:141] libmachine: (newest-cni-342058) Calling .DriverName
	W0729 14:59:35.064112 1046852 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:59:35.066457 1046852 out.go:177] * Restarting existing kvm2 VM for "newest-cni-342058" ...
	
	
	==> CRI-O <==
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.650174586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265175650148232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b4dd297-2e3c-4299-b953-daeb23f0ac2e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.650639279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10c727c1-21ad-4883-a483-0d69e16c0273 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.650687521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10c727c1-21ad-4883-a483-0d69e16c0273 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.651238760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1,PodSandboxId:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722264216772966033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{io.kubernetes.container.hash: 563fca7f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128,PodSandboxId:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216431137276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,},Annotations:map[string]string{io.kubernetes.container.hash: 6f594bdd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a,PodSandboxId:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216357683411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6932c87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49,PodSandboxId:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722264215064916832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,},Annotations:map[string]string{io.kubernetes.container.hash: ff69ba70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203,PodSandboxId:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722264195674414791,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,},Annotations:map[string]string{io.kubernetes.container.hash: f375b443,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66,PodSandboxId:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722264195674573378,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4,PodSandboxId:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722264195651073789,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5,PodSandboxId:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722264195603680804,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,},Annotations:map[string]string{io.kubernetes.container.hash: cec89149,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10c727c1-21ad-4883-a483-0d69e16c0273 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.697237195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21041ab1-eec4-4ce5-a8c9-74d4ec18d523 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.697383269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21041ab1-eec4-4ce5-a8c9-74d4ec18d523 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.698837370Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0dad0266-b681-4a4e-9534-fb64ffa25225 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.699563189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265175699532402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dad0266-b681-4a4e-9534-fb64ffa25225 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.700511426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0a0e0ad-3591-4e9c-921d-fe6f1a3a4d5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.700590475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0a0e0ad-3591-4e9c-921d-fe6f1a3a4d5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.704616852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1,PodSandboxId:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722264216772966033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{io.kubernetes.container.hash: 563fca7f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128,PodSandboxId:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216431137276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,},Annotations:map[string]string{io.kubernetes.container.hash: 6f594bdd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a,PodSandboxId:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216357683411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6932c87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49,PodSandboxId:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722264215064916832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,},Annotations:map[string]string{io.kubernetes.container.hash: ff69ba70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203,PodSandboxId:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722264195674414791,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,},Annotations:map[string]string{io.kubernetes.container.hash: f375b443,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66,PodSandboxId:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722264195674573378,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4,PodSandboxId:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722264195651073789,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5,PodSandboxId:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722264195603680804,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,},Annotations:map[string]string{io.kubernetes.container.hash: cec89149,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0a0e0ad-3591-4e9c-921d-fe6f1a3a4d5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.748567152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b7adbe3-3023-4174-8af5-8937b41df9ed name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.748639835Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b7adbe3-3023-4174-8af5-8937b41df9ed name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.749724963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6aaca50-f23a-4ca5-b49e-592269786fdd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.750549442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265175750525794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6aaca50-f23a-4ca5-b49e-592269786fdd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.751308258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=606d2674-39e1-42f1-83e5-2f0f701b8cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.751375165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=606d2674-39e1-42f1-83e5-2f0f701b8cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.751555346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1,PodSandboxId:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722264216772966033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{io.kubernetes.container.hash: 563fca7f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128,PodSandboxId:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216431137276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,},Annotations:map[string]string{io.kubernetes.container.hash: 6f594bdd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a,PodSandboxId:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216357683411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6932c87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49,PodSandboxId:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722264215064916832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,},Annotations:map[string]string{io.kubernetes.container.hash: ff69ba70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203,PodSandboxId:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722264195674414791,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,},Annotations:map[string]string{io.kubernetes.container.hash: f375b443,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66,PodSandboxId:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722264195674573378,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4,PodSandboxId:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722264195651073789,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5,PodSandboxId:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722264195603680804,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,},Annotations:map[string]string{io.kubernetes.container.hash: cec89149,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=606d2674-39e1-42f1-83e5-2f0f701b8cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.786433557Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=276f7916-7e0a-4b01-8659-90428afb7f6b name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.786530374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=276f7916-7e0a-4b01-8659-90428afb7f6b name=/runtime.v1.RuntimeService/Version
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.787935155Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b36ea47d-4431-4517-acfc-4d02347bc8b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.788512756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265175788486692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b36ea47d-4431-4517-acfc-4d02347bc8b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.789086330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f96ceaf-6bf6-4b1d-a8f2-dff840fb4003 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.789158801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f96ceaf-6bf6-4b1d-a8f2-dff840fb4003 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:59:35 default-k8s-diff-port-751306 crio[723]: time="2024-07-29 14:59:35.789401254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1,PodSandboxId:bde11caeb61a241211e7250335ce11ba8f256ef25d308e4728252384cc6b8405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722264216772966033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8bf282a-27e8-43f9-a2ac-af6000a4decc,},Annotations:map[string]string{io.kubernetes.container.hash: 563fca7f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128,PodSandboxId:2c44d4c4d31e4885b86d2882ca9c0804d7ba8f33ea1f2e8507d279cdcac60e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216431137276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxmwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b78c9b-97dc-4313-92d1-76fab481b276,},Annotations:map[string]string{io.kubernetes.container.hash: 6f594bdd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a,PodSandboxId:41e6b7d9f82ab11b5fc1f2b8480adb855cc1f9d414597594e86cd8c8fccbb5f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264216357683411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7qhqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 88941d43-c67d-4190-896c-edfc4c96b9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6932c87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49,PodSandboxId:2d44b9233bf1361e08ff867e05a0664ea54aecf54a2689c95912016cade1684f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1722264215064916832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqtjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd100e13-d714-4ddb-ba43-44be43035b3f,},Annotations:map[string]string{io.kubernetes.container.hash: ff69ba70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203,PodSandboxId:626217ee90597f16d8cab9f78137260e2fa38b9a2cf5a2755c797cc5db544bd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722264195674414791,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb851f12643318c164e97cb88a8f291b,},Annotations:map[string]string{io.kubernetes.container.hash: f375b443,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66,PodSandboxId:1f4c7d480f8519a8ca59cbcfe91b153ae0fcdb08d1da1269cae1621e74489600,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722264195674573378,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77500357cb96f62f4b1d5e33dd3b234,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4,PodSandboxId:2bd103026510ce3c2ef7366871f3a71055e2b8fab7052bad30484dd82883a127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722264195651073789,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a705fe5bc92d59a4f4ff0e77713908eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5,PodSandboxId:61aa2726ce2814ab416eccb78aa526a00aa7545bd399784ed431c00498b871fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722264195603680804,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751306,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0576d2cd711613b3730e4289c9117d50,},Annotations:map[string]string{io.kubernetes.container.hash: cec89149,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f96ceaf-6bf6-4b1d-a8f2-dff840fb4003 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bdd9ff82307b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   bde11caeb61a2       storage-provisioner
	a3a8915e6c345       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   2c44d4c4d31e4       coredns-7db6d8ff4d-zxmwx
	1c67adc88ce93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   41e6b7d9f82ab       coredns-7db6d8ff4d-7qhqh
	a900fda5b7398       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   2d44b9233bf13       kube-proxy-tqtjx
	d29c63a72f53b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   1f4c7d480f851       kube-scheduler-default-k8s-diff-port-751306
	374debdcd43ec       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   626217ee90597       etcd-default-k8s-diff-port-751306
	f8e9a6a0684ae       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   2                   2bd103026510c       kube-controller-manager-default-k8s-diff-port-751306
	b6a77e374a960       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            2                   61aa2726ce281       kube-apiserver-default-k8s-diff-port-751306
	
	
	==> coredns [1c67adc88ce9352e7b92fbb3c3fe544031541740cc373676086afdc52e3fad7a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a3a8915e6c345a50c54e0c4be67595fa294182ca8b3760d4d705b094cfba1128] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-751306
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-751306
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=default-k8s-diff-port-751306
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T14_43_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:43:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-751306
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:59:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:58:58 +0000   Mon, 29 Jul 2024 14:43:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:58:58 +0000   Mon, 29 Jul 2024 14:43:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:58:58 +0000   Mon, 29 Jul 2024 14:43:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:58:58 +0000   Mon, 29 Jul 2024 14:43:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.233
	  Hostname:    default-k8s-diff-port-751306
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c5d0f2c12df40eea13545c58de8c6ff
	  System UUID:                7c5d0f2c-12df-40ee-a135-45c58de8c6ff
	  Boot ID:                    ce2a1acc-c6b5-4b33-b8fe-c8d27e8b278f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-7qhqh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-zxmwx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-751306                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-751306             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-751306    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tqtjx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-751306             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-z9wg5                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    0 (0%)
	  memory             440Mi (20%)   340Mi (16%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-751306 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-751306 event: Registered Node default-k8s-diff-port-751306 in Controller
	
	
	==> dmesg <==
	[  +0.062201] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.052629] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul29 14:38] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.403922] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.573185] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.297981] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.062685] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066894] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.193315] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.116147] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.305072] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.529224] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.065971] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.151022] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.664028] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.020312] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 14:42] kauditd_printk_skb: 3 callbacks suppressed
	[Jul29 14:43] systemd-fstab-generator[3562]: Ignoring "noauto" option for root device
	[  +4.613950] kauditd_printk_skb: 59 callbacks suppressed
	[  +1.449871] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[ +14.397876] systemd-fstab-generator[4100]: Ignoring "noauto" option for root device
	[  +0.084643] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 14:44] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [374debdcd43ec5f9922cc1fc8045ba8ec8b2735c815c5287104bb35e338b6203] <==
	{"level":"info","ts":"2024-07-29T14:43:16.392389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:16.392508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:16.392609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c received MsgPreVoteResp from defc8511a11a071c at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:16.392648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:16.392673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c received MsgVoteResp from defc8511a11a071c at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:16.3927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"defc8511a11a071c became leader at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:16.392725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: defc8511a11a071c elected leader defc8511a11a071c at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:16.397445Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:16.399576Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"defc8511a11a071c","local-member-attributes":"{Name:default-k8s-diff-port-751306 ClientURLs:[https://192.168.72.233:2379]}","request-path":"/0/members/defc8511a11a071c/attributes","cluster-id":"85608bfa40f43412","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:43:16.399794Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:43:16.400193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:43:16.40232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"85608bfa40f43412","local-member-id":"defc8511a11a071c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:16.408368Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:16.411341Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:16.402396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:43:16.411413Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:43:16.405823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.233:2379"}
	{"level":"info","ts":"2024-07-29T14:43:16.414898Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T14:53:16.570775Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":710}
	{"level":"info","ts":"2024-07-29T14:53:16.581391Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":710,"took":"9.667755ms","hash":872100160,"current-db-size-bytes":2355200,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2355200,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-29T14:53:16.581486Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":872100160,"revision":710,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T14:58:16.577588Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":953}
	{"level":"info","ts":"2024-07-29T14:58:16.581354Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":953,"took":"3.396082ms","hash":1680759334,"current-db-size-bytes":2355200,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T14:58:16.581403Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1680759334,"revision":953,"compact-revision":710}
	{"level":"info","ts":"2024-07-29T14:59:07.586752Z","caller":"traceutil/trace.go:171","msg":"trace[911037497] transaction","detail":"{read_only:false; response_revision:1239; number_of_response:1; }","duration":"105.078326ms","start":"2024-07-29T14:59:07.481614Z","end":"2024-07-29T14:59:07.586693Z","steps":["trace[911037497] 'process raft request'  (duration: 104.920795ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:59:36 up 21 min,  0 users,  load average: 0.03, 0.11, 0.13
	Linux default-k8s-diff-port-751306 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b6a77e374a9600351c778ccb777afea2c840723fe748ad9ec3447c51c216d0e5] <==
	I0729 14:54:19.278650       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:56:19.277035       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:56:19.277186       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:56:19.277198       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:56:19.279242       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:56:19.279394       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:56:19.279424       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:58:18.280433       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:58:18.280630       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 14:58:19.281431       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:58:19.281529       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:58:19.281539       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:58:19.281438       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:58:19.281599       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:58:19.282610       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:59:19.281852       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:59:19.282145       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 14:59:19.282185       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:59:19.282912       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 14:59:19.282974       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 14:59:19.284196       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f8e9a6a0684ae10c3a5ab4d6e2cea31a31b8de3dfb544135fc3b57109dfc74c4] <==
	I0729 14:54:05.242248       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:54:34.732845       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:54:35.261477       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:54:39.771776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="949.799µs"
	I0729 14:54:52.773957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="166.701µs"
	E0729 14:55:04.739139       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:55:05.270020       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:55:34.744572       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:55:35.278793       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:56:04.750538       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:56:05.286877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:56:34.760031       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:56:35.299460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:57:04.765737       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:57:05.308747       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:57:34.771069       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:57:35.316381       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:58:04.776184       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:58:05.328069       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:58:34.781163       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:58:35.339613       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:59:04.786927       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:59:05.351457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:59:34.794363       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 14:59:35.360964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a900fda5b73988982b89265c41acdb4ff78f9df6d010ba1115310a197a72bb49] <==
	I0729 14:43:35.270824       1 server_linux.go:69] "Using iptables proxy"
	I0729 14:43:35.283381       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.233"]
	I0729 14:43:35.360745       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 14:43:35.360797       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:43:35.360814       1 server_linux.go:165] "Using iptables Proxier"
	I0729 14:43:35.366667       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 14:43:35.366938       1 server.go:872] "Version info" version="v1.30.3"
	I0729 14:43:35.366976       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:43:35.375130       1 config.go:192] "Starting service config controller"
	I0729 14:43:35.375168       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:43:35.375190       1 config.go:101] "Starting endpoint slice config controller"
	I0729 14:43:35.375193       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:43:35.375665       1 config.go:319] "Starting node config controller"
	I0729 14:43:35.375692       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:43:35.475432       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 14:43:35.475493       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:43:35.476184       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d29c63a72f53bcd28b18db96b1bda62d0f2eb3f54660499b2aeb5a74c1573a66] <==
	W0729 14:43:18.314461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 14:43:18.314487       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 14:43:18.314526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 14:43:18.314551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 14:43:18.314590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 14:43:18.314615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 14:43:19.156682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 14:43:19.156827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 14:43:19.233212       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 14:43:19.233306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 14:43:19.291359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 14:43:19.291408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 14:43:19.307074       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 14:43:19.307120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 14:43:19.337754       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 14:43:19.337841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 14:43:19.441874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 14:43:19.441922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 14:43:19.539799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 14:43:19.539846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 14:43:19.568729       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 14:43:19.568784       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 14:43:19.600068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 14:43:19.600117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0729 14:43:22.403370       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 14:57:20 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:57:20.786565    3888 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:57:20 default-k8s-diff-port-751306 kubelet[3888]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:57:20 default-k8s-diff-port-751306 kubelet[3888]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:57:20 default-k8s-diff-port-751306 kubelet[3888]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:57:20 default-k8s-diff-port-751306 kubelet[3888]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:57:22 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:57:22.755197    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:57:36 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:57:36.754574    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:57:49 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:57:49.754800    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:58:03 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:58:03.754686    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:58:15 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:58:15.754787    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:58:20 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:58:20.784883    3888 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:58:20 default-k8s-diff-port-751306 kubelet[3888]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:58:20 default-k8s-diff-port-751306 kubelet[3888]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:58:20 default-k8s-diff-port-751306 kubelet[3888]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:58:20 default-k8s-diff-port-751306 kubelet[3888]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:58:27 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:58:27.755022    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:58:42 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:58:42.756054    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:58:57 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:58:57.754653    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:59:09 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:59:09.754758    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	Jul 29 14:59:20 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:59:20.787545    3888 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:59:20 default-k8s-diff-port-751306 kubelet[3888]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:59:20 default-k8s-diff-port-751306 kubelet[3888]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:59:20 default-k8s-diff-port-751306 kubelet[3888]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:59:20 default-k8s-diff-port-751306 kubelet[3888]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:59:24 default-k8s-diff-port-751306 kubelet[3888]: E0729 14:59:24.753599    3888 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-z9wg5" podUID="f022dfec-8e97-4679-a7dd-739c9231af82"
	
	
	==> storage-provisioner [bdd9ff82307b099f776bfa4dac869b5b9dacc92f558831d274af5627900dbce1] <==
	I0729 14:43:36.898474       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 14:43:36.917969       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 14:43:36.918214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 14:43:36.956215       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 14:43:36.957012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-751306_c62c2e71-0363-41ed-9cf8-6f9e32f048cc!
	I0729 14:43:36.958805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5710b067-6ce0-4fdf-b225-4caad0b7f64b", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-751306_c62c2e71-0363-41ed-9cf8-6f9e32f048cc became leader
	I0729 14:43:37.059373       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-751306_c62c2e71-0363-41ed-9cf8-6f9e32f048cc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-751306 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-z9wg5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-751306 describe pod metrics-server-569cc877fc-z9wg5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-751306 describe pod metrics-server-569cc877fc-z9wg5: exit status 1 (65.58153ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-z9wg5" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-751306 describe pod metrics-server-569cc877fc-z9wg5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (413.70s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (315.69s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603534 -n no-preload-603534
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 14:58:38.75689783 +0000 UTC m=+6405.164621616
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-603534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-603534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.657µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-603534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-603534 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-603534 logs -n 25: (1.257902239s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo find                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo crio                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-513289                                      | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-054967 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | disable-driver-mounts-054967                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:58 UTC | 29 Jul 24 14:58 UTC |
	| start   | -p newest-cni-342058 --memory=2200 --alsologtostderr   | newest-cni-342058            | jenkins | v1.33.1 | 29 Jul 24 14:58 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:58:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:58:35.824086 1045900 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:58:35.824438 1045900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:58:35.824453 1045900 out.go:304] Setting ErrFile to fd 2...
	I0729 14:58:35.824460 1045900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:58:35.824649 1045900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:58:35.825261 1045900 out.go:298] Setting JSON to false
	I0729 14:58:35.826335 1045900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16868,"bootTime":1722248248,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:58:35.826397 1045900 start.go:139] virtualization: kvm guest
	I0729 14:58:35.828545 1045900 out.go:177] * [newest-cni-342058] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:58:35.829936 1045900 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:58:35.829988 1045900 notify.go:220] Checking for updates...
	I0729 14:58:35.832514 1045900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:58:35.833801 1045900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:58:35.834939 1045900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:58:35.835984 1045900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:58:35.836986 1045900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:58:35.838590 1045900 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:58:35.838700 1045900 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:58:35.838797 1045900 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:58:35.838921 1045900 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:58:35.877174 1045900 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 14:58:35.878316 1045900 start.go:297] selected driver: kvm2
	I0729 14:58:35.878329 1045900 start.go:901] validating driver "kvm2" against <nil>
	I0729 14:58:35.878340 1045900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:58:35.879051 1045900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:58:35.879135 1045900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:58:35.894392 1045900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:58:35.894442 1045900 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 14:58:35.894471 1045900 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 14:58:35.894765 1045900 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 14:58:35.894815 1045900 cni.go:84] Creating CNI manager for ""
	I0729 14:58:35.894829 1045900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:58:35.894843 1045900 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 14:58:35.894928 1045900 start.go:340] cluster config:
	{Name:newest-cni-342058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-342058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:58:35.895077 1045900 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:58:35.896871 1045900 out.go:177] * Starting "newest-cni-342058" primary control-plane node in "newest-cni-342058" cluster
	I0729 14:58:35.898117 1045900 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:58:35.898152 1045900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:58:35.898162 1045900 cache.go:56] Caching tarball of preloaded images
	I0729 14:58:35.898237 1045900 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:58:35.898250 1045900 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 14:58:35.898353 1045900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/config.json ...
	I0729 14:58:35.898372 1045900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/newest-cni-342058/config.json: {Name:mkc145ec5636537f5dfe60e5bf91f2b50771e489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:58:35.898538 1045900 start.go:360] acquireMachinesLock for newest-cni-342058: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:58:35.898573 1045900 start.go:364] duration metric: took 19.613µs to acquireMachinesLock for "newest-cni-342058"
	I0729 14:58:35.898604 1045900 start.go:93] Provisioning new machine with config: &{Name:newest-cni-342058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-342058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:58:35.898687 1045900 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.433477853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6dd43b8-d3c1-4ab9-9b48-fd795de20b53 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.434996042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd71bb24-2e54-4786-aabb-d4343e462898 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.435515852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265119435489312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd71bb24-2e54-4786-aabb-d4343e462898 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.436767077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea627d4b-7285-421e-a6af-788e9fc15c50 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.436829274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea627d4b-7285-421e-a6af-788e9fc15c50 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.437013077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae,PodSandboxId:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722264250913736881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332,PodSandboxId:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250848035358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf,PodSandboxId:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250717019254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6
-5948239fbe22,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6,PodSandboxId:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172226425050
7359919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34,PodSandboxId:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722264238363781983,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138,PodSandboxId:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172226423836831
8347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4,PodSandboxId:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722264238310299470,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005,PodSandboxId:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722264238275790509,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581,PodSandboxId:440219b39f43524fc5f0d664323b1a1731b2d76ea2d0e0fe114483030fb9cc7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263950405226839,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea627d4b-7285-421e-a6af-788e9fc15c50 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.482898013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=004d48b0-3de2-47cd-afed-7e54191155f9 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.482968595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=004d48b0-3de2-47cd-afed-7e54191155f9 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.484062672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8c60fb7-2c60-4fdf-b893-5d251d717928 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.484438715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265119484414144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8c60fb7-2c60-4fdf-b893-5d251d717928 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.485470196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8469b2e5-b2b4-4032-b9e1-c0bfbebf4d8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.485521295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8469b2e5-b2b4-4032-b9e1-c0bfbebf4d8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.485826318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae,PodSandboxId:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722264250913736881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332,PodSandboxId:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250848035358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf,PodSandboxId:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250717019254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6
-5948239fbe22,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6,PodSandboxId:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172226425050
7359919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34,PodSandboxId:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722264238363781983,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138,PodSandboxId:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172226423836831
8347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4,PodSandboxId:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722264238310299470,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005,PodSandboxId:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722264238275790509,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581,PodSandboxId:440219b39f43524fc5f0d664323b1a1731b2d76ea2d0e0fe114483030fb9cc7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263950405226839,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8469b2e5-b2b4-4032-b9e1-c0bfbebf4d8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.500203002Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15f5505f-f16c-49ab-b794-cea1e6ad39a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.500408963Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&PodSandboxMetadata{Name:kube-proxy-7mr4z,Uid:17de173c-2b95-4b35-a9d7-b38f065270cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250505955076,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:44:08.662585954Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bff127fc3c7af36fbd54e60957ad04d46f49d1c07604e328225a54407e31f00e,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-852x6,Uid:637fea9b-2924-4593-a4e2-99a
33ab613d2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250492197895,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-852x6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 637fea9b-2924-4593-a4e2-99a33ab613d2,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:44:09.580705386Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7336eb38-d53d-4456-8367-cf843abe5cb5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250356160994,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d
53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T14:44:09.446644688Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-m6q8r,Uid
:b3a0c38d-1587-4fdf-b2e6-58d364ca400b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250306365400,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:44:09.385876018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-vn8z4,Uid:4654aadf-7870-46b6-96e6-5948239fbe22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264250284358121,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6-5948239fbe22,k8s-app: kube-dns,pod-templat
e-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T14:44:09.375056851Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-603534,Uid:a507bf021d9946f3c35a7a86fe923cbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264238148317681,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a507bf021d9946f3c35a7a86fe923cbf,kubernetes.io/config.seen: 2024-07-29T14:43:57.668201474Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&PodSandboxMeta
data{Name:kube-apiserver-no-preload-603534,Uid:030bfbd8969aea4a7e101617f158291c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722264238147354738,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.116:8443,kubernetes.io/config.hash: 030bfbd8969aea4a7e101617f158291c,kubernetes.io/config.seen: 2024-07-29T14:43:57.668200510Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-603534,Uid:8fcccc7872085cdb6b3a955d71b243a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264238125946141,Labels:map[string]string{component: etcd,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.116:2379,kubernetes.io/config.hash: 8fcccc7872085cdb6b3a955d71b243a1,kubernetes.io/config.seen: 2024-07-29T14:43:57.668198871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-603534,Uid:1f72a6b058d2718cb66f4eeea0a3654f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722264238108098872,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: 1f72a6b058d2718cb66f4eeea0a3654f,kubernetes.io/config.seen: 2024-07-29T14:43:57.668195631Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=15f5505f-f16c-49ab-b794-cea1e6ad39a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.501445290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee6e18db-d551-4893-b96d-597cd7e16ff3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.501495841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee6e18db-d551-4893-b96d-597cd7e16ff3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.501710590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae,PodSandboxId:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722264250913736881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332,PodSandboxId:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250848035358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf,PodSandboxId:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250717019254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6
-5948239fbe22,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6,PodSandboxId:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172226425050
7359919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34,PodSandboxId:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722264238363781983,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138,PodSandboxId:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172226423836831
8347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4,PodSandboxId:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722264238310299470,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005,PodSandboxId:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722264238275790509,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee6e18db-d551-4893-b96d-597cd7e16ff3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.537058658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b8f1636-5cb7-479d-a812-5dbe67dc6b25 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.537133971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b8f1636-5cb7-479d-a812-5dbe67dc6b25 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.538700125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39c1e111-743a-4b28-8f40-0032e2331b97 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.539052843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265119539029964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39c1e111-743a-4b28-8f40-0032e2331b97 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.540146027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ab91e1a-db36-4773-8984-c14b33e4f5ef name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.540202989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ab91e1a-db36-4773-8984-c14b33e4f5ef name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:39 no-preload-603534 crio[704]: time="2024-07-29 14:58:39.540393651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae,PodSandboxId:826d5900986dc5acb6c2971e8d13fe0b8531d69bab08a305af616eea37c3d991,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722264250913736881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7mr4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17de173c-2b95-4b35-a9d7-b38f065270cb,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332,PodSandboxId:00f7003edf93a4900eeda2295188cf2f1063d2585c35b97ac1d9ba682e280aed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250848035358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m6q8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c38d-1587-4fdf-b2e6-58d364ca400b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf,PodSandboxId:50ed731e85ec5c471ac3d014e97f7802dc0e2e1cd7bb1dc473bf9dc1ca079982,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722264250717019254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vn8z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4654aadf-7870-46b6-96e6
-5948239fbe22,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6,PodSandboxId:03b9c68038f3798da0eca7a7a2e11a80c460b5b6d7aad2e68507ef3e2f7eec13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172226425050
7359919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7336eb38-d53d-4456-8367-cf843abe5cb5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34,PodSandboxId:e4c5c23be7a6e9973420708cfa9e45bdfb5c766a812474ee569e917671f6bcd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722264238363781983,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a507bf021d9946f3c35a7a86fe923cbf,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138,PodSandboxId:a0eb60c303bfd9dde62aa265eac035fe331cde0bfca88501291444bd7744f0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172226423836831
8347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4,PodSandboxId:f26c144d676e76d81412db9908d0a5acb239b07dc9834eeb70624c20a0bbcb89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722264238310299470,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcccc7872085cdb6b3a955d71b243a1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005,PodSandboxId:f9ec319d1da3568e0719ca4746afd3b56a358d5a2b86e86009254954c9bd5cb7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722264238275790509,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f72a6b058d2718cb66f4eeea0a3654f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581,PodSandboxId:440219b39f43524fc5f0d664323b1a1731b2d76ea2d0e0fe114483030fb9cc7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722263950405226839,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-603534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030bfbd8969aea4a7e101617f158291c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ab91e1a-db36-4773-8984-c14b33e4f5ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	59102fc127ead       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   826d5900986dc       kube-proxy-7mr4z
	1c34642e35aaf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   00f7003edf93a       coredns-5cfdc65f69-m6q8r
	f9d4e39be60a2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   50ed731e85ec5       coredns-5cfdc65f69-vn8z4
	bbb33b2f8ba13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   03b9c68038f37       storage-provisioner
	1a09ba6b389e6       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Running             kube-apiserver            2                   a0eb60c303bfd       kube-apiserver-no-preload-603534
	350ebe7aa8d4e       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   14 minutes ago      Running             kube-controller-manager   2                   e4c5c23be7a6e       kube-controller-manager-no-preload-603534
	a3df0b9137680       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   14 minutes ago      Running             etcd                      2                   f26c144d676e7       etcd-no-preload-603534
	8c99c444a37ed       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   14 minutes ago      Running             kube-scheduler            2                   f9ec319d1da35       kube-scheduler-no-preload-603534
	44890ba7dc13d       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            1                   440219b39f435       kube-apiserver-no-preload-603534
	
	
	==> coredns [1c34642e35aaf3332f2279dc2227b183712e40694ad294b0123bd4603cc43332] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f9d4e39be60a23aae63d02d8805442bd7a4af5e5e3d7ee48f595c49330e530bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-603534
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-603534
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411
	                    minikube.k8s.io/name=no-preload-603534
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T14_44_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 14:44:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-603534
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 14:58:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 14:54:24 +0000   Mon, 29 Jul 2024 14:43:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 14:54:24 +0000   Mon, 29 Jul 2024 14:43:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 14:54:24 +0000   Mon, 29 Jul 2024 14:43:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 14:54:24 +0000   Mon, 29 Jul 2024 14:44:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.116
	  Hostname:    no-preload-603534
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dac63dda337f45c4af568f12ef5857c7
	  System UUID:                dac63dda-337f-45c4-af56-8f12ef5857c7
	  Boot ID:                    b4dbf66e-f911-43eb-a6ce-460f01ecb2bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-m6q8r                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5cfdc65f69-vn8z4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-603534                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-603534             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-603534    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-7mr4z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-603534             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-78fcd8795b-852x6              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-603534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-603534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-603534 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-603534 event: Registered Node no-preload-603534 in Controller
	
	
	==> dmesg <==
	[  +0.047878] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.280113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.682591] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600780] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.811417] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.061748] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059691] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.173086] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.150180] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.297577] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[Jul29 14:39] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.062616] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.993130] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +3.594472] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.391279] kauditd_printk_skb: 53 callbacks suppressed
	[ +10.056526] kauditd_printk_skb: 30 callbacks suppressed
	[Jul29 14:43] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.252677] systemd-fstab-generator[2932]: Ignoring "noauto" option for root device
	[Jul29 14:44] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.482864] systemd-fstab-generator[3253]: Ignoring "noauto" option for root device
	[  +5.396867] systemd-fstab-generator[3386]: Ignoring "noauto" option for root device
	[  +0.113093] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.778774] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [a3df0b91376802f635a04378c4714bfa34e075b6304b75108a23d0a79238bde4] <==
	{"level":"info","ts":"2024-07-29T14:43:58.623003Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3ff2c8dabfa88909","initial-advertise-peer-urls":["https://192.168.61.116:2380"],"listen-peer-urls":["https://192.168.61.116:2380"],"advertise-client-urls":["https://192.168.61.116:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.116:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T14:43:58.623051Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T14:43:58.966588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:58.966637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:58.966664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 received MsgPreVoteResp from 3ff2c8dabfa88909 at term 1"}
	{"level":"info","ts":"2024-07-29T14:43:58.966678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:58.966683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 received MsgVoteResp from 3ff2c8dabfa88909 at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:58.966691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ff2c8dabfa88909 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:58.966698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3ff2c8dabfa88909 elected leader 3ff2c8dabfa88909 at term 2"}
	{"level":"info","ts":"2024-07-29T14:43:58.970818Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3ff2c8dabfa88909","local-member-attributes":"{Name:no-preload-603534 ClientURLs:[https://192.168.61.116:2379]}","request-path":"/0/members/3ff2c8dabfa88909/attributes","cluster-id":"d8013dd48c9fa2cd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T14:43:58.970876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:43:58.971331Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:58.976141Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T14:43:58.979215Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T14:43:58.979326Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d8013dd48c9fa2cd","local-member-id":"3ff2c8dabfa88909","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:58.979404Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:58.979422Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T14:43:58.979695Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T14:43:58.993042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T14:43:58.993081Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T14:43:58.993728Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T14:43:58.994394Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.116:2379"}
	{"level":"info","ts":"2024-07-29T14:53:59.297848Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":702}
	{"level":"info","ts":"2024-07-29T14:53:59.313862Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":702,"took":"15.642584ms","hash":893838012,"current-db-size-bytes":2166784,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2166784,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-29T14:53:59.313926Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":893838012,"revision":702,"compact-revision":-1}
	
	
	==> kernel <==
	 14:58:39 up 20 min,  0 users,  load average: 0.49, 0.24, 0.15
	Linux no-preload-603534 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1a09ba6b389e63cde39ed3a19ebb621799a59c8eec9365fcc5014d30c475a138] <==
	W0729 14:54:01.842039       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:54:01.842087       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 14:54:01.843240       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 14:54:01.843277       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:55:01.843945       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:55:01.844062       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 14:55:01.843978       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:55:01.844153       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 14:55:01.845300       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 14:55:01.845342       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 14:57:01.846260       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:57:01.846430       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 14:57:01.846261       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 14:57:01.846476       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 14:57:01.847612       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 14:57:01.847649       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [44890ba7dc13df089f6ade07a2b5affa9a03801256b664bfe53a5fe90ffbe581] <==
	W0729 14:43:50.576792       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.578289       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.602780       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.647047       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.657458       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.671063       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.725113       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.727665       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.745499       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.801221       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.807707       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.839184       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.849979       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.857993       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.862687       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.889843       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:50.955171       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:51.045056       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:51.086354       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:51.255212       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:51.382109       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:54.737595       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:55.189506       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:55.295155       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 14:43:55.301737       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [350ebe7aa8d4ed7b7d7365d86b2145de78871d9aeba1d8c7620d4fcf38ca4b34] <==
	E0729 14:53:38.890235       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:53:38.934438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:54:08.897484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:54:08.942233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:54:24.480112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-603534"
	E0729 14:54:38.904060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:54:38.949812       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:55:04.520254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="277.368µs"
	E0729 14:55:08.912817       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:55:08.957638       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 14:55:15.518922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="109.135µs"
	E0729 14:55:38.919502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:55:38.966206       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:56:08.928088       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:56:08.973748       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:56:38.934665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:56:38.982044       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:57:08.942082       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:57:08.990191       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:57:38.949393       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:57:38.999693       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:58:08.956192       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:58:09.008787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 14:58:38.963295       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 14:58:39.018349       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [59102fc127eadf5bca454d468d807e4ddc401c25ec256b95cfb40beab761e9ae] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 14:44:11.309249       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 14:44:11.320203       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.116"]
	E0729 14:44:11.320330       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 14:44:11.358497       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 14:44:11.358612       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 14:44:11.358645       1 server_linux.go:170] "Using iptables Proxier"
	I0729 14:44:11.362226       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 14:44:11.362525       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 14:44:11.362714       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 14:44:11.364192       1 config.go:197] "Starting service config controller"
	I0729 14:44:11.364234       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 14:44:11.364269       1 config.go:104] "Starting endpoint slice config controller"
	I0729 14:44:11.364285       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 14:44:11.365821       1 config.go:326] "Starting node config controller"
	I0729 14:44:11.365919       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 14:44:11.465397       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 14:44:11.465518       1 shared_informer.go:320] Caches are synced for service config
	I0729 14:44:11.466224       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8c99c444a37edac1fbf9d2386af17d001d83a4362ee1737e7e14d57f16b04005] <==
	W0729 14:44:00.886454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 14:44:00.895246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.772476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 14:44:01.773940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.839906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 14:44:01.840221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.872078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 14:44:01.872163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.924395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 14:44:01.924448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:01.948591       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 14:44:01.948633       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0729 14:44:01.976852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 14:44:01.976911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.044492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 14:44:02.044785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.087444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 14:44:02.087585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.096509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 14:44:02.096656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.122914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 14:44:02.123406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 14:44:02.176934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 14:44:02.176985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0729 14:44:05.165124       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 14:56:03 no-preload-603534 kubelet[3260]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:56:03 no-preload-603534 kubelet[3260]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:56:03 no-preload-603534 kubelet[3260]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:56:09 no-preload-603534 kubelet[3260]: E0729 14:56:09.504970    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:56:24 no-preload-603534 kubelet[3260]: E0729 14:56:24.499061    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:56:36 no-preload-603534 kubelet[3260]: E0729 14:56:36.500524    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:56:51 no-preload-603534 kubelet[3260]: E0729 14:56:51.499639    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:57:03 no-preload-603534 kubelet[3260]: E0729 14:57:03.518947    3260 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:57:03 no-preload-603534 kubelet[3260]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:57:03 no-preload-603534 kubelet[3260]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:57:03 no-preload-603534 kubelet[3260]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:57:03 no-preload-603534 kubelet[3260]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:57:04 no-preload-603534 kubelet[3260]: E0729 14:57:04.499296    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:57:18 no-preload-603534 kubelet[3260]: E0729 14:57:18.498184    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:57:29 no-preload-603534 kubelet[3260]: E0729 14:57:29.499190    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:57:43 no-preload-603534 kubelet[3260]: E0729 14:57:43.500377    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:57:55 no-preload-603534 kubelet[3260]: E0729 14:57:55.501267    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:58:03 no-preload-603534 kubelet[3260]: E0729 14:58:03.517411    3260 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 14:58:03 no-preload-603534 kubelet[3260]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 14:58:03 no-preload-603534 kubelet[3260]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 14:58:03 no-preload-603534 kubelet[3260]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 14:58:03 no-preload-603534 kubelet[3260]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 14:58:06 no-preload-603534 kubelet[3260]: E0729 14:58:06.498361    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:58:18 no-preload-603534 kubelet[3260]: E0729 14:58:18.499168    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	Jul 29 14:58:31 no-preload-603534 kubelet[3260]: E0729 14:58:31.502763    3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-852x6" podUID="637fea9b-2924-4593-a4e2-99a33ab613d2"
	
	
	==> storage-provisioner [bbb33b2f8ba13c27b9d65919a5f98e9b861ccc2280499c3faa47fa04c5e20ac6] <==
	I0729 14:44:10.719049       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 14:44:10.815829       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 14:44:10.815937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 14:44:10.840434       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 14:44:10.842964       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-603534_f855ee1d-7237-41ad-b4c5-a39692277466!
	I0729 14:44:10.844237       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be182905-5f1a-4b11-a1af-a0aaaa08f016", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-603534_f855ee1d-7237-41ad-b4c5-a39692277466 became leader
	I0729 14:44:10.943739       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-603534_f855ee1d-7237-41ad-b4c5-a39692277466!
	

                                                
                                                
-- /stdout --
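The captured logs above show three recurring symptoms: the control plane losing its etcd connection on 127.0.0.1:2379 (connection refused), the controller-manager repeatedly hitting stale metrics.k8s.io/v1beta1 discovery, and the kubelet backing off on the unreachable fake.domain/registry.k8s.io/echoserver:1.4 image used for metrics-server. The commands below are an illustrative follow-up sketch against this profile, not part of the recorded test run; they assume the standard v1beta1.metrics.k8s.io APIService name and a kube-system Deployment named metrics-server:

	# APIService behind the "stale GroupVersion discovery" errors
	kubectl --context no-preload-603534 get apiservice v1beta1.metrics.k8s.io
	# image reference the kubelet keeps failing to pull
	kubectl --context no-preload-603534 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# confirm etcd is running inside the control-plane VM
	out/minikube-linux-amd64 -p no-preload-603534 ssh "sudo crictl ps | grep etcd"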
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603534 -n no-preload-603534
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-603534 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-852x6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-603534 describe pod metrics-server-78fcd8795b-852x6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-603534 describe pod metrics-server-78fcd8795b-852x6: exit status 1 (72.372139ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-852x6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-603534 describe pod metrics-server-78fcd8795b-852x6: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (315.69s)
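AddonExistsAfterStop waits for the dashboard addon pods (selector k8s-app=kubernetes-dashboard) to come back after the restart; the describe above returns NotFound, which suggests the metrics-server pod listed a moment earlier was already gone by the time the post-mortem ran. When reproducing the check by hand, an equivalent query (an illustrative sketch, not part of the recorded run) is:

	kubectl --context no-preload-603534 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-603534 get pods -A --field-selector=status.phase!=Running

The same selector is what the test's poller requests, which is why the old-k8s-version section below repeats the labelSelector=k8s-app%3Dkubernetes-dashboard URL against 192.168.39.71:8443 on every retry.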

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (169.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:56:10.159862  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:57:06.665720  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:57:22.664727  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/calico-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:57:59.776040  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
E0729 14:58:08.881814  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.71:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.71:8443: connect: connection refused
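The warnings above come from the test helper repeatedly listing pods by label selector while the old-k8s-version-360866 API server stays down, so every request ends in "connection refused" until the 9m0s deadline expires. The following is only an illustrative client-go sketch of that kind of poll loop, not minikube's helpers_test.go code; the kubeconfig path is a placeholder and the 5-second interval is an assumption.

    // Minimal sketch (not the minikube test helper): list pods by label selector
    // in a retry loop until a deadline, surfacing transport errors as warnings.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; a real run would point at the profile's kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(9 * time.Minute) // the test gives up after 9m0s
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
    		if err != nil {
    			// With the API server stopped, this is where "connection refused" shows up.
    			fmt.Println("WARNING: pod list returned:", err)
    			time.Sleep(5 * time.Second) // assumed poll interval
    			continue
    		}
    		fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
    		return
    	}
    	fmt.Println("context deadline exceeded")
    }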
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 2 (240.774522ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-360866" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-360866 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-360866 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.987µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-360866 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
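For reference, the assertion that produced the empty "Addon deployment info" above checks whether the dashboard-metrics-scraper deployment references registry.k8s.io/echoserver:1.4, the image substituted when the dashboard addon was enabled (see the Audit table in the logs below). A hedged client-go sketch of that check, reusing a clientset built as in the earlier sketch and assuming a reachable API server, might look like:

    // Minimal sketch (not the test's own assertion): report whether the
    // dashboard-metrics-scraper deployment uses the expected custom image.
    // Additional imports needed: context, strings, metav1, k8s.io/client-go/kubernetes.
    func scraperUsesExpectedImage(ctx context.Context, client kubernetes.Interface) (bool, error) {
    	dep, err := client.AppsV1().Deployments("kubernetes-dashboard").Get(
    		ctx, "dashboard-metrics-scraper", metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range dep.Spec.Template.Spec.Containers {
    		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
    			return true, nil
    		}
    	}
    	return false, nil
    }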
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 2 (224.021557ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-360866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-360866 logs -n 25: (1.649384434s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-513289 sudo cat                             | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo                                 | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo find                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-513289 sudo crio                            | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-513289                                      | flannel-513289               | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-054967 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:29 UTC |
	|         | disable-driver-mounts-054967                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:29 UTC | 29 Jul 24 14:31 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-603534             | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC | 29 Jul 24 14:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-603534                                   | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-668123            | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751306  | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC | 29 Jul 24 14:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:31 UTC |                     |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-603534                  | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-603534 --memory=2200                     | no-preload-603534            | jenkins | v1.33.1 | 29 Jul 24 14:32 UTC | 29 Jul 24 14:44 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-360866        | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-668123                 | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-668123                                  | embed-certs-668123           | jenkins | v1.33.1 | 29 Jul 24 14:33 UTC | 29 Jul 24 14:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751306       | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751306 | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:43 UTC |
	|         | default-k8s-diff-port-751306                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-360866             | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC | 29 Jul 24 14:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-360866                              | old-k8s-version-360866       | jenkins | v1.33.1 | 29 Jul 24 14:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 14:34:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 14:34:53.874295 1039759 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:34:53.874567 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874577 1039759 out.go:304] Setting ErrFile to fd 2...
	I0729 14:34:53.874580 1039759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:34:53.874774 1039759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:34:53.875294 1039759 out.go:298] Setting JSON to false
	I0729 14:34:53.876313 1039759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15446,"bootTime":1722248248,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:34:53.876373 1039759 start.go:139] virtualization: kvm guest
	I0729 14:34:53.878446 1039759 out.go:177] * [old-k8s-version-360866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:34:53.879820 1039759 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:34:53.879855 1039759 notify.go:220] Checking for updates...
	I0729 14:34:53.882201 1039759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:34:53.883330 1039759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:34:53.884514 1039759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:34:53.885734 1039759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:34:53.886894 1039759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:34:53.888361 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:34:53.888789 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.888850 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.903960 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 14:34:53.904467 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.905083 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.905112 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.905449 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.905609 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.907360 1039759 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 14:34:53.908710 1039759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:34:53.909026 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:34:53.909064 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:34:53.923834 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0729 14:34:53.924300 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:34:53.924787 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:34:53.924809 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:34:53.925150 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:34:53.925352 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:34:53.960368 1039759 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 14:34:53.961649 1039759 start.go:297] selected driver: kvm2
	I0729 14:34:53.961662 1039759 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.961778 1039759 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:34:53.962398 1039759 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.962459 1039759 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 14:34:53.977941 1039759 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 14:34:53.978311 1039759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:34:53.978341 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:34:53.978350 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:34:53.978395 1039759 start.go:340] cluster config:
	{Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:34:53.978499 1039759 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 14:34:53.980167 1039759 out.go:177] * Starting "old-k8s-version-360866" primary control-plane node in "old-k8s-version-360866" cluster
	I0729 14:34:55.588663 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:34:53.981356 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:34:53.981390 1039759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 14:34:53.981400 1039759 cache.go:56] Caching tarball of preloaded images
	I0729 14:34:53.981477 1039759 preload.go:172] Found /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 14:34:53.981487 1039759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 14:34:53.981600 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:34:53.981775 1039759 start.go:360] acquireMachinesLock for old-k8s-version-360866: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:34:58.660730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:04.740665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:07.812781 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:13.892659 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:16.964692 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:23.044749 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:26.116761 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:32.196730 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:35.268709 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:41.348712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:44.420693 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:50.500715 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:53.572717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:35:59.652707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:02.724722 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:08.804719 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:11.876665 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:17.956684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:21.028707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:27.108667 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:30.180710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:36.260645 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:39.332717 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:45.412694 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:48.484713 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:54.564703 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:36:57.636707 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:03.716690 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:06.788660 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:12.868658 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:15.940708 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:22.020684 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:25.092712 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:31.172710 1038758 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.116:22: connect: no route to host
	I0729 14:37:34.177216 1039263 start.go:364] duration metric: took 3m42.890532077s to acquireMachinesLock for "embed-certs-668123"
	I0729 14:37:34.177291 1039263 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:34.177300 1039263 fix.go:54] fixHost starting: 
	I0729 14:37:34.177641 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:34.177673 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:34.193427 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0729 14:37:34.193879 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:34.194396 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:37:34.194421 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:34.194774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:34.194987 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:34.195156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:37:34.196597 1039263 fix.go:112] recreateIfNeeded on embed-certs-668123: state=Stopped err=<nil>
	I0729 14:37:34.196642 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	W0729 14:37:34.196802 1039263 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:34.198564 1039263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-668123" ...
	I0729 14:37:34.199926 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Start
	I0729 14:37:34.200086 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring networks are active...
	I0729 14:37:34.200833 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network default is active
	I0729 14:37:34.201159 1039263 main.go:141] libmachine: (embed-certs-668123) Ensuring network mk-embed-certs-668123 is active
	I0729 14:37:34.201578 1039263 main.go:141] libmachine: (embed-certs-668123) Getting domain xml...
	I0729 14:37:34.202214 1039263 main.go:141] libmachine: (embed-certs-668123) Creating domain...
	I0729 14:37:34.510575 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting to get IP...
	I0729 14:37:34.511459 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.511909 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.512006 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.511904 1040307 retry.go:31] will retry after 294.890973ms: waiting for machine to come up
	I0729 14:37:34.808513 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:34.809044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:34.809070 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:34.809007 1040307 retry.go:31] will retry after 296.152247ms: waiting for machine to come up
	I0729 14:37:35.106423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.106839 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.106872 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.106773 1040307 retry.go:31] will retry after 384.830082ms: waiting for machine to come up
	I0729 14:37:35.493463 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.493902 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.493933 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.493861 1040307 retry.go:31] will retry after 490.673812ms: waiting for machine to come up
	I0729 14:37:35.986675 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:35.987184 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:35.987235 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:35.987099 1040307 retry.go:31] will retry after 725.022775ms: waiting for machine to come up
	I0729 14:37:34.174673 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:34.174713 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175060 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:37:34.175084 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:37:34.175279 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:37:34.177042 1038758 machine.go:97] duration metric: took 4m37.39644293s to provisionDockerMachine
	I0729 14:37:34.177087 1038758 fix.go:56] duration metric: took 4m37.417815827s for fixHost
	I0729 14:37:34.177094 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 4m37.417912853s
	W0729 14:37:34.177127 1038758 start.go:714] error starting host: provision: host is not running
	W0729 14:37:34.177230 1038758 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 14:37:34.177240 1038758 start.go:729] Will try again in 5 seconds ...
	I0729 14:37:36.714078 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:36.714502 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:36.714565 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:36.714389 1040307 retry.go:31] will retry after 722.684756ms: waiting for machine to come up
	I0729 14:37:37.438316 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:37.438859 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:37.438891 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:37.438802 1040307 retry.go:31] will retry after 1.163999997s: waiting for machine to come up
	I0729 14:37:38.604109 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:38.604503 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:38.604531 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:38.604469 1040307 retry.go:31] will retry after 1.401566003s: waiting for machine to come up
	I0729 14:37:40.007310 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:40.007900 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:40.007929 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:40.007839 1040307 retry.go:31] will retry after 1.40470791s: waiting for machine to come up
	I0729 14:37:39.178982 1038758 start.go:360] acquireMachinesLock for no-preload-603534: {Name:mk751e57256ca523e1aae60bb753bc041a65d89e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 14:37:41.414509 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:41.415018 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:41.415049 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:41.414959 1040307 retry.go:31] will retry after 2.205183048s: waiting for machine to come up
	I0729 14:37:43.623427 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:43.623894 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:43.623922 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:43.623856 1040307 retry.go:31] will retry after 2.444881913s: waiting for machine to come up
	I0729 14:37:46.070961 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:46.071314 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:46.071338 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:46.071271 1040307 retry.go:31] will retry after 3.115189863s: waiting for machine to come up
	I0729 14:37:49.187610 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:49.188107 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | unable to find current IP address of domain embed-certs-668123 in network mk-embed-certs-668123
	I0729 14:37:49.188134 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | I0729 14:37:49.188054 1040307 retry.go:31] will retry after 3.139484284s: waiting for machine to come up
	I0729 14:37:53.653416 1039440 start.go:364] duration metric: took 3m41.12464482s to acquireMachinesLock for "default-k8s-diff-port-751306"
	I0729 14:37:53.653486 1039440 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:37:53.653494 1039440 fix.go:54] fixHost starting: 
	I0729 14:37:53.653880 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:37:53.653913 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:37:53.671499 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0729 14:37:53.671927 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:37:53.672550 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:37:53.672584 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:37:53.672986 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:37:53.673198 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:37:53.673353 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:37:53.674706 1039440 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751306: state=Stopped err=<nil>
	I0729 14:37:53.674736 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	W0729 14:37:53.674896 1039440 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:37:53.677098 1039440 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751306" ...
	I0729 14:37:52.329477 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.329880 1039263 main.go:141] libmachine: (embed-certs-668123) Found IP for machine: 192.168.50.53
	I0729 14:37:52.329895 1039263 main.go:141] libmachine: (embed-certs-668123) Reserving static IP address...
	I0729 14:37:52.329906 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has current primary IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.330376 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.330414 1039263 main.go:141] libmachine: (embed-certs-668123) Reserved static IP address: 192.168.50.53
	I0729 14:37:52.330433 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | skip adding static IP to network mk-embed-certs-668123 - found existing host DHCP lease matching {name: "embed-certs-668123", mac: "52:54:00:a3:92:a4", ip: "192.168.50.53"}
	I0729 14:37:52.330453 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Getting to WaitForSSH function...
	I0729 14:37:52.330465 1039263 main.go:141] libmachine: (embed-certs-668123) Waiting for SSH to be available...
	I0729 14:37:52.332510 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332794 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.332821 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.332897 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH client type: external
	I0729 14:37:52.332931 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa (-rw-------)
	I0729 14:37:52.332963 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:37:52.332976 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | About to run SSH command:
	I0729 14:37:52.332989 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | exit 0
	I0729 14:37:52.456152 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | SSH cmd err, output: <nil>: 
	I0729 14:37:52.456532 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetConfigRaw
	I0729 14:37:52.457156 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.459620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.459946 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.459980 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.460200 1039263 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/config.json ...
	I0729 14:37:52.460384 1039263 machine.go:94] provisionDockerMachine start ...
	I0729 14:37:52.460404 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:52.460672 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.462798 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463089 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.463119 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.463260 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.463428 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463594 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.463703 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.463856 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.464071 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.464080 1039263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:37:52.564925 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:37:52.564959 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565214 1039263 buildroot.go:166] provisioning hostname "embed-certs-668123"
	I0729 14:37:52.565241 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.565472 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.568131 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568450 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.568482 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.568615 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.568825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.568975 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.569143 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.569335 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.569511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.569522 1039263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-668123 && echo "embed-certs-668123" | sudo tee /etc/hostname
	I0729 14:37:52.686424 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-668123
	
	I0729 14:37:52.686459 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.689074 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689387 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.689422 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.689619 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:52.689825 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.689999 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:52.690164 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:52.690338 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:52.690511 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:52.690526 1039263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-668123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-668123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-668123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:37:52.801778 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:37:52.801812 1039263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:37:52.801841 1039263 buildroot.go:174] setting up certificates
	I0729 14:37:52.801851 1039263 provision.go:84] configureAuth start
	I0729 14:37:52.801863 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetMachineName
	I0729 14:37:52.802133 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:52.804526 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.804877 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.804910 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.805053 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:52.807140 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807369 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:52.807395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:52.807505 1039263 provision.go:143] copyHostCerts
	I0729 14:37:52.807594 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:37:52.807608 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:37:52.807698 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:37:52.807840 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:37:52.807852 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:37:52.807891 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:37:52.807969 1039263 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:37:52.807979 1039263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:37:52.808011 1039263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:37:52.808084 1039263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-668123 san=[127.0.0.1 192.168.50.53 embed-certs-668123 localhost minikube]
	I0729 14:37:53.007382 1039263 provision.go:177] copyRemoteCerts
	I0729 14:37:53.007459 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:37:53.007548 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.010097 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010465 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.010488 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.010660 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.010864 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.011037 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.011193 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.092043 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 14:37:53.116737 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:37:53.139828 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:37:53.162813 1039263 provision.go:87] duration metric: took 360.943219ms to configureAuth
	I0729 14:37:53.162856 1039263 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:37:53.163051 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:37:53.163144 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.165757 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166102 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.166130 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.166272 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.166465 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166665 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.166817 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.166983 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.167154 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.167169 1039263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:37:53.428217 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:37:53.428246 1039263 machine.go:97] duration metric: took 967.84942ms to provisionDockerMachine
	I0729 14:37:53.428258 1039263 start.go:293] postStartSetup for "embed-certs-668123" (driver="kvm2")
	I0729 14:37:53.428269 1039263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:37:53.428298 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.428641 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:37:53.428669 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.431228 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431593 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.431620 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.431797 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.431992 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.432159 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.432313 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.511226 1039263 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:37:53.515527 1039263 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:37:53.515555 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:37:53.515635 1039263 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:37:53.515724 1039263 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:37:53.515846 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:37:53.525606 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:53.548757 1039263 start.go:296] duration metric: took 120.484005ms for postStartSetup
	I0729 14:37:53.548798 1039263 fix.go:56] duration metric: took 19.371497305s for fixHost
	I0729 14:37:53.548827 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.551373 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551697 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.551725 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.551866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.552085 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552226 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.552383 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.552574 1039263 main.go:141] libmachine: Using SSH client type: native
	I0729 14:37:53.552746 1039263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0729 14:37:53.552756 1039263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:37:53.653267 1039263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263873.628230451
	
	I0729 14:37:53.653291 1039263 fix.go:216] guest clock: 1722263873.628230451
	I0729 14:37:53.653301 1039263 fix.go:229] Guest: 2024-07-29 14:37:53.628230451 +0000 UTC Remote: 2024-07-29 14:37:53.548802078 +0000 UTC m=+242.399919494 (delta=79.428373ms)
	I0729 14:37:53.653329 1039263 fix.go:200] guest clock delta is within tolerance: 79.428373ms
	I0729 14:37:53.653337 1039263 start.go:83] releasing machines lock for "embed-certs-668123", held for 19.476079428s
	I0729 14:37:53.653364 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.653673 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:53.656383 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656805 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.656836 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.656958 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657597 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657831 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:37:53.657923 1039263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:37:53.657981 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.658101 1039263 ssh_runner.go:195] Run: cat /version.json
	I0729 14:37:53.658129 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:37:53.660964 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661044 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661349 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661374 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661400 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:53.661446 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:53.661628 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661711 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:37:53.661795 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.661918 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:37:53.662012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662092 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:37:53.662200 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.662234 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:37:53.764286 1039263 ssh_runner.go:195] Run: systemctl --version
	I0729 14:37:53.772936 1039263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:37:53.922874 1039263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:37:53.928953 1039263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:37:53.929035 1039263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:37:53.947388 1039263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:37:53.947417 1039263 start.go:495] detecting cgroup driver to use...
	I0729 14:37:53.947496 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:37:53.964141 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:37:53.985980 1039263 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:37:53.986042 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:37:54.009646 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:37:54.023449 1039263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:37:54.139511 1039263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:37:54.312559 1039263 docker.go:233] disabling docker service ...
	I0729 14:37:54.312655 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:37:54.327466 1039263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:37:54.342225 1039263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:37:54.485007 1039263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:37:54.623987 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:37:54.638100 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:37:54.658833 1039263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:37:54.658911 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.670274 1039263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:37:54.670366 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.681548 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.691626 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.701915 1039263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:37:54.713399 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.723631 1039263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.740625 1039263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:37:54.751521 1039263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:37:54.761895 1039263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:37:54.761942 1039263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:37:54.775663 1039263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:37:54.785415 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:54.933441 1039263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:37:55.066449 1039263 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:37:55.066539 1039263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:37:55.071614 1039263 start.go:563] Will wait 60s for crictl version
	I0729 14:37:55.071671 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:37:55.075727 1039263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:37:55.117286 1039263 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:37:55.117395 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.145732 1039263 ssh_runner.go:195] Run: crio --version
	I0729 14:37:55.179714 1039263 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:37:55.181109 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetIP
	I0729 14:37:55.184274 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.184734 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:37:55.184761 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:37:55.185066 1039263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 14:37:55.190374 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:37:55.206768 1039263 kubeadm.go:883] updating cluster {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:37:55.207054 1039263 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:37:55.207130 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:55.247814 1039263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:37:55.247890 1039263 ssh_runner.go:195] Run: which lz4
	I0729 14:37:55.251992 1039263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:37:55.256440 1039263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:37:55.256468 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:37:53.678402 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Start
	I0729 14:37:53.678610 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring networks are active...
	I0729 14:37:53.679311 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network default is active
	I0729 14:37:53.679767 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Ensuring network mk-default-k8s-diff-port-751306 is active
	I0729 14:37:53.680133 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Getting domain xml...
	I0729 14:37:53.680818 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Creating domain...
	I0729 14:37:54.024601 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting to get IP...
	I0729 14:37:54.025431 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025838 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.025944 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.025837 1040438 retry.go:31] will retry after 280.254814ms: waiting for machine to come up
	I0729 14:37:54.307727 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.308293 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.308220 1040438 retry.go:31] will retry after 384.348242ms: waiting for machine to come up
	I0729 14:37:54.693703 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694304 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:54.694334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:54.694251 1040438 retry.go:31] will retry after 417.76448ms: waiting for machine to come up
	I0729 14:37:55.113670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114243 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.114272 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.114191 1040438 retry.go:31] will retry after 589.741485ms: waiting for machine to come up
	I0729 14:37:55.706127 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:55.706646 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:55.706569 1040438 retry.go:31] will retry after 471.427821ms: waiting for machine to come up
	I0729 14:37:56.179380 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179867 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.179896 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.179814 1040438 retry.go:31] will retry after 624.275074ms: waiting for machine to come up
	I0729 14:37:56.805673 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806042 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:56.806063 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:56.806018 1040438 retry.go:31] will retry after 1.027377333s: waiting for machine to come up
	I0729 14:37:56.743842 1039263 crio.go:462] duration metric: took 1.49188656s to copy over tarball
	I0729 14:37:56.743941 1039263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:37:58.879573 1039263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135595087s)
	I0729 14:37:58.879619 1039263 crio.go:469] duration metric: took 2.135735155s to extract the tarball
	I0729 14:37:58.879628 1039263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:37:58.916966 1039263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:37:58.958323 1039263 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:37:58.958349 1039263 cache_images.go:84] Images are preloaded, skipping loading
	I0729 14:37:58.958357 1039263 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.30.3 crio true true} ...
	I0729 14:37:58.958537 1039263 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-668123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
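The kubelet unit fragment logged above is written to a systemd drop-in a few steps later (the 317-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A quick way to see the merged unit on the guest, assuming the standard minikube layout:

  $ minikube ssh -p embed-certs-668123 -- sudo systemctl cat kubelet
  $ minikube ssh -p embed-certs-668123 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl cat prints /lib/systemd/system/kubelet.service together with every drop-in; the empty ExecStart= line clears the base unit's ExecStart before the full ExecStart override takes effect.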
	I0729 14:37:58.958632 1039263 ssh_runner.go:195] Run: crio config
	I0729 14:37:59.004120 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:37:59.004146 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:37:59.004163 1039263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:37:59.004192 1039263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-668123 NodeName:embed-certs-668123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:37:59.004371 1039263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-668123"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:37:59.004469 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:37:59.014796 1039263 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:37:59.014866 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:37:59.024575 1039263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 14:37:59.040707 1039263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:37:59.056693 1039263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
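The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and only promoted to kubeadm.yaml after the diff check further down. As a sketch, it can be sanity-checked with the same kubeadm binary minikube installed on the guest; "kubeadm config validate" is available in v1.30, but treat the exact invocation as illustrative rather than part of the test flow:

  $ minikube ssh -p embed-certs-668123 -- sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new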
	I0729 14:37:59.073320 1039263 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0729 14:37:59.077226 1039263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
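The /etc/hosts one-liner above is an idempotent upsert: drop any existing control-plane.minikube.internal entry, append the current one, and copy the temp file back into place. A more readable equivalent of the same pattern (illustrative only, same effect):

  $ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
  $ printf '192.168.50.53\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
  $ sudo cp /tmp/hosts.new /etc/hosts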
	I0729 14:37:59.091283 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:37:59.221532 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:37:59.239319 1039263 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123 for IP: 192.168.50.53
	I0729 14:37:59.239362 1039263 certs.go:194] generating shared ca certs ...
	I0729 14:37:59.239387 1039263 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:37:59.239604 1039263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:37:59.239654 1039263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:37:59.239667 1039263 certs.go:256] generating profile certs ...
	I0729 14:37:59.239818 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/client.key
	I0729 14:37:59.239922 1039263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key.544998fe
	I0729 14:37:59.239969 1039263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key
	I0729 14:37:59.240137 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:37:59.240188 1039263 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:37:59.240202 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:37:59.240238 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:37:59.240280 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:37:59.240313 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:37:59.240385 1039263 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:37:59.241551 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:37:59.278842 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:37:59.305668 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:37:59.332314 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:37:59.377867 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 14:37:59.405592 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:37:59.438073 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:37:59.462130 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/embed-certs-668123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:37:59.489158 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:37:59.511811 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:37:59.534728 1039263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:37:59.558680 1039263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:37:59.575404 1039263 ssh_runner.go:195] Run: openssl version
	I0729 14:37:59.581518 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:37:59.592024 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596913 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.596983 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:37:59.602973 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:37:59.613891 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:37:59.624053 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628881 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.628922 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:37:59.634672 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:37:59.645513 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:37:59.656385 1039263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661141 1039263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.661209 1039263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:37:59.667169 1039263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:37:59.678240 1039263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:37:59.683075 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:37:59.689013 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:37:59.694754 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:37:59.700865 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:37:59.706664 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:37:59.712457 1039263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
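Each of the openssl runs above uses -checkend 86400, which exits 0 when the certificate will still be valid 24 hours from now and non-zero when it would expire inside that window. A minimal standalone check against one of the same files, assuming the path from the log:

  $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo 'valid for >= 24h' || echo 'expiring or unreadable'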
	I0729 14:37:59.718347 1039263 kubeadm.go:392] StartCluster: {Name:embed-certs-668123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-668123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:37:59.718460 1039263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:37:59.718505 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.756046 1039263 cri.go:89] found id: ""
	I0729 14:37:59.756143 1039263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:37:59.766198 1039263 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:37:59.766222 1039263 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:37:59.766278 1039263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:37:59.775664 1039263 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:37:59.776877 1039263 kubeconfig.go:125] found "embed-certs-668123" server: "https://192.168.50.53:8443"
	I0729 14:37:59.778802 1039263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:37:59.787805 1039263 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.53
	I0729 14:37:59.787840 1039263 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:37:59.787854 1039263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:37:59.787908 1039263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:37:59.828927 1039263 cri.go:89] found id: ""
	I0729 14:37:59.829016 1039263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:37:59.844889 1039263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:37:59.854233 1039263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:37:59.854264 1039263 kubeadm.go:157] found existing configuration files:
	
	I0729 14:37:59.854334 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:37:59.863123 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:37:59.863183 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:37:59.872154 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:37:59.880819 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:37:59.880881 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:37:59.889714 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.898278 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:37:59.898330 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:37:59.907358 1039263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:37:59.916352 1039263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:37:59.916430 1039263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:37:59.925239 1039263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:37:59.934353 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.045086 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:00.793783 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.009839 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.080217 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:01.153377 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:01.153496 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:37:57.835202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835636 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:57.835674 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:57.835572 1040438 retry.go:31] will retry after 987.763901ms: waiting for machine to come up
	I0729 14:37:58.824975 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825428 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:37:58.825457 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:37:58.825348 1040438 retry.go:31] will retry after 1.189429393s: waiting for machine to come up
	I0729 14:38:00.016130 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016569 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:00.016604 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:00.016509 1040438 retry.go:31] will retry after 1.424039091s: waiting for machine to come up
	I0729 14:38:01.443138 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443511 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:01.443540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:01.443470 1040438 retry.go:31] will retry after 2.531090823s: waiting for machine to come up
	I0729 14:38:01.653905 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.153772 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.653590 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:02.669429 1039263 api_server.go:72] duration metric: took 1.516051254s to wait for apiserver process to appear ...
	I0729 14:38:02.669467 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:02.669495 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.531413 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.531451 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.531467 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.602173 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:05.602205 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:05.670522 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:05.680835 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:05.680861 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.170512 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.176052 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.176084 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:06.669679 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:06.674813 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:06.674854 1039263 api_server.go:103] status: https://192.168.50.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:07.170539 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:38:07.174573 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:38:07.180250 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:07.180275 1039263 api_server.go:131] duration metric: took 4.510799806s to wait for apiserver health ...
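The 403 -> 500 -> 200 progression above is the normal shape of a control-plane restart: anonymous probes are rejected until the RBAC rules exposing /healthz to unauthenticated users are loaded, then individual post-start hooks fail until they finish, and finally the endpoint returns a bare "ok". The same per-check breakdown can be fetched by hand; a sketch using the apiserver address from the log, skipping TLS verification for brevity:

  $ curl -sk 'https://192.168.50.53:8443/healthz?verbose'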
	I0729 14:38:07.180284 1039263 cni.go:84] Creating CNI manager for ""
	I0729 14:38:07.180290 1039263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:07.181866 1039263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:38:03.976004 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976514 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:03.976544 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:03.976474 1040438 retry.go:31] will retry after 3.356304099s: waiting for machine to come up
	I0729 14:38:07.335600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336031 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | unable to find current IP address of domain default-k8s-diff-port-751306 in network mk-default-k8s-diff-port-751306
	I0729 14:38:07.336086 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | I0729 14:38:07.335992 1040438 retry.go:31] will retry after 3.345416128s: waiting for machine to come up
	I0729 14:38:07.182966 1039263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:07.193166 1039263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
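The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier; its contents are not logged here. To inspect the real file, and as an illustrative (not byte-for-byte) example of the bridge + host-local + portmap shape such a conflist takes:

  $ minikube ssh -p embed-certs-668123 -- sudo cat /etc/cni/net.d/1-k8s.conflist
  $ cat > /tmp/bridge-example.conflist <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF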
	I0729 14:38:07.212801 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:07.221297 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:07.221331 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:07.221340 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:07.221347 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:07.221352 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:07.221364 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:38:07.221370 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:07.221379 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:07.221384 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:38:07.221390 1039263 system_pods.go:74] duration metric: took 8.574498ms to wait for pod list to return data ...
	I0729 14:38:07.221397 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:07.224197 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:07.224220 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:07.224231 1039263 node_conditions.go:105] duration metric: took 2.829585ms to run NodePressure ...
	I0729 14:38:07.224246 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:07.520049 1039263 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524228 1039263 kubeadm.go:739] kubelet initialised
	I0729 14:38:07.524251 1039263 kubeadm.go:740] duration metric: took 4.174563ms waiting for restarted kubelet to initialise ...
	I0729 14:38:07.524262 1039263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:07.529174 1039263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.533534 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533554 1039263 pod_ready.go:81] duration metric: took 4.355926ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.533562 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.533567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.537529 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537550 1039263 pod_ready.go:81] duration metric: took 3.975082ms for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.537561 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "etcd-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.537567 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.542299 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542325 1039263 pod_ready.go:81] duration metric: took 4.747863ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.542371 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.542390 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:07.616688 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616725 1039263 pod_ready.go:81] duration metric: took 74.323327ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:07.616740 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:07.616750 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.016334 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016360 1039263 pod_ready.go:81] duration metric: took 399.599984ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.016369 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-proxy-2v79q" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.016374 1039263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.416536 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416571 1039263 pod_ready.go:81] duration metric: took 400.189243ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.416585 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.416594 1039263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:08.817526 1039263 pod_ready.go:97] node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817561 1039263 pod_ready.go:81] duration metric: took 400.956263ms for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:38:08.817572 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-668123" hosting pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:08.817590 1039263 pod_ready.go:38] duration metric: took 1.293313082s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:08.817610 1039263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:38:08.829394 1039263 ops.go:34] apiserver oom_adj: -16
	I0729 14:38:08.829425 1039263 kubeadm.go:597] duration metric: took 9.06319609s to restartPrimaryControlPlane
	I0729 14:38:08.829436 1039263 kubeadm.go:394] duration metric: took 9.111098315s to StartCluster
	I0729 14:38:08.829457 1039263 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.829548 1039263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:08.831113 1039263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:08.831396 1039263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:38:08.831441 1039263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:38:08.831524 1039263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-668123"
	I0729 14:38:08.831539 1039263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-668123"
	I0729 14:38:08.831562 1039263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-668123"
	W0729 14:38:08.831572 1039263 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:38:08.831561 1039263 addons.go:69] Setting metrics-server=true in profile "embed-certs-668123"
	I0729 14:38:08.831593 1039263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-668123"
	I0729 14:38:08.831601 1039263 addons.go:234] Setting addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:08.831609 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	W0729 14:38:08.831610 1039263 addons.go:243] addon metrics-server should already be in state true
	I0729 14:38:08.831632 1039263 config.go:182] Loaded profile config "embed-certs-668123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:08.831644 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.831916 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831933 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831918 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.831956 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831945 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.831964 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.833223 1039263 out.go:177] * Verifying Kubernetes components...
	I0729 14:38:08.834403 1039263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:08.847231 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0729 14:38:08.847362 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0729 14:38:08.847398 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0729 14:38:08.847797 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847896 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.847904 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.848350 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848371 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848487 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848507 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848520 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.848540 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.848774 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848854 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.848867 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.849010 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849363 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.849392 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.849416 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.851933 1039263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-668123"
	W0729 14:38:08.851955 1039263 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:38:08.851988 1039263 host.go:66] Checking if "embed-certs-668123" exists ...
	I0729 14:38:08.852284 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.852330 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.865255 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I0729 14:38:08.865707 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.865981 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0729 14:38:08.866157 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866183 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.866419 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.866531 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.866804 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.866895 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.866920 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.867272 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.867839 1039263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:08.867885 1039263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:08.868000 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0729 14:38:08.868397 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.868741 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.868886 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.868903 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.869276 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.869501 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.870835 1039263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:38:08.871289 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.872267 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:38:08.872289 1039263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:38:08.872306 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.873165 1039263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:08.874593 1039263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:08.874616 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:38:08.874635 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.875061 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875572 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.875605 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.875815 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.876012 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.876208 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.876370 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.877997 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878394 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.878423 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.878555 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.878726 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.878889 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.879002 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:08.890720 1039263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0729 14:38:08.891092 1039263 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:08.891619 1039263 main.go:141] libmachine: Using API Version  1
	I0729 14:38:08.891638 1039263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:08.891972 1039263 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:08.892184 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetState
	I0729 14:38:08.893577 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .DriverName
	I0729 14:38:08.893817 1039263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:08.893840 1039263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:38:08.893859 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHHostname
	I0729 14:38:08.896843 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897302 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:92:a4", ip: ""} in network mk-embed-certs-668123: {Iface:virbr2 ExpiryTime:2024-07-29 15:37:44 +0000 UTC Type:0 Mac:52:54:00:a3:92:a4 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:embed-certs-668123 Clientid:01:52:54:00:a3:92:a4}
	I0729 14:38:08.897320 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | domain embed-certs-668123 has defined IP address 192.168.50.53 and MAC address 52:54:00:a3:92:a4 in network mk-embed-certs-668123
	I0729 14:38:08.897464 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHPort
	I0729 14:38:08.897618 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHKeyPath
	I0729 14:38:08.897866 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .GetSSHUsername
	I0729 14:38:08.897966 1039263 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/embed-certs-668123/id_rsa Username:docker}
	I0729 14:38:09.019001 1039263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:09.038038 1039263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:09.097896 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:38:09.101844 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:38:09.229339 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:38:09.229360 1039263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:38:09.317591 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:38:09.317625 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:38:09.370444 1039263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:09.370490 1039263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:38:09.407869 1039263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:38:10.014739 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014767 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.014873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.014897 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015112 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015150 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015157 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015166 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015174 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015284 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015297 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015306 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.015313 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.015384 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015413 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.015395 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.015611 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.015641 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024010 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.024031 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.024299 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.024318 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.024343 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.233873 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.233903 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234247 1039263 main.go:141] libmachine: (embed-certs-668123) DBG | Closing plugin on server side
	I0729 14:38:10.234260 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234275 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234292 1039263 main.go:141] libmachine: Making call to close driver server
	I0729 14:38:10.234301 1039263 main.go:141] libmachine: (embed-certs-668123) Calling .Close
	I0729 14:38:10.234546 1039263 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:38:10.234563 1039263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:38:10.234574 1039263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-668123"
	I0729 14:38:10.236215 1039263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:38:10.237377 1039263 addons.go:510] duration metric: took 1.405942032s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:38:11.042263 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:12.129080 1039759 start.go:364] duration metric: took 3m18.14725367s to acquireMachinesLock for "old-k8s-version-360866"
	I0729 14:38:12.129155 1039759 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:12.129166 1039759 fix.go:54] fixHost starting: 
	I0729 14:38:12.129715 1039759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:12.129752 1039759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:12.146596 1039759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0729 14:38:12.147101 1039759 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:12.147554 1039759 main.go:141] libmachine: Using API Version  1
	I0729 14:38:12.147581 1039759 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:12.147871 1039759 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:12.148094 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:12.148293 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetState
	I0729 14:38:12.149880 1039759 fix.go:112] recreateIfNeeded on old-k8s-version-360866: state=Stopped err=<nil>
	I0729 14:38:12.149918 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	W0729 14:38:12.150120 1039759 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:12.152003 1039759 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-360866" ...
	I0729 14:38:10.683699 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684108 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Found IP for machine: 192.168.72.233
	I0729 14:38:10.684148 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has current primary IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.684161 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserving static IP address...
	I0729 14:38:10.684506 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.684540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | skip adding static IP to network mk-default-k8s-diff-port-751306 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751306", mac: "52:54:00:9f:b9:23", ip: "192.168.72.233"}
	I0729 14:38:10.684558 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Reserved static IP address: 192.168.72.233
	I0729 14:38:10.684581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Waiting for SSH to be available...
	I0729 14:38:10.684600 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Getting to WaitForSSH function...
	I0729 14:38:10.686336 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686684 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.686713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.686825 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH client type: external
	I0729 14:38:10.686856 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa (-rw-------)
	I0729 14:38:10.686894 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:10.686904 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | About to run SSH command:
	I0729 14:38:10.686921 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | exit 0
	I0729 14:38:10.808536 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:10.808965 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetConfigRaw
	I0729 14:38:10.809613 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:10.812200 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812590 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.812625 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.812862 1039440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/config.json ...
	I0729 14:38:10.813089 1039440 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:10.813110 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:10.813322 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.815607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.815933 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.815962 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.816113 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.816287 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816450 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.816623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.816838 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.817167 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.817184 1039440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:10.916864 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:10.916908 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917215 1039440 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751306"
	I0729 14:38:10.917249 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:10.917478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:10.919961 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920339 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:10.920363 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:10.920471 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:10.920660 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:10.920991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:10.921145 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:10.921358 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:10.921377 1039440 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751306 && echo "default-k8s-diff-port-751306" | sudo tee /etc/hostname
	I0729 14:38:11.034826 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751306
	
	I0729 14:38:11.034859 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.037494 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.037836 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.037870 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.038068 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.038274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038410 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.038575 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.038736 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.038971 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.038998 1039440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751306/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:11.146350 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:11.146391 1039440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:11.146449 1039440 buildroot.go:174] setting up certificates
	I0729 14:38:11.146463 1039440 provision.go:84] configureAuth start
	I0729 14:38:11.146478 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetMachineName
	I0729 14:38:11.146842 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:11.149492 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149766 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.149796 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.149927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.152449 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152735 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.152785 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.152956 1039440 provision.go:143] copyHostCerts
	I0729 14:38:11.153010 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:11.153021 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:11.153074 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:11.153172 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:11.153180 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:11.153198 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:11.153253 1039440 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:11.153260 1039440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:11.153276 1039440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:11.153324 1039440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751306 san=[127.0.0.1 192.168.72.233 default-k8s-diff-port-751306 localhost minikube]
	I0729 14:38:11.489907 1039440 provision.go:177] copyRemoteCerts
	I0729 14:38:11.489990 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:11.490028 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.492487 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492801 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.492832 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.492992 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.493220 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.493467 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.493611 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.574475 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:11.598182 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:11.622809 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 14:38:11.646533 1039440 provision.go:87] duration metric: took 500.054139ms to configureAuth
	I0729 14:38:11.646563 1039440 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:11.646742 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:38:11.646822 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.649260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649581 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.649616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.649729 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.649934 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.650274 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.650436 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:11.650610 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:11.650628 1039440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:11.906877 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:11.906918 1039440 machine.go:97] duration metric: took 1.093811728s to provisionDockerMachine
	I0729 14:38:11.906936 1039440 start.go:293] postStartSetup for "default-k8s-diff-port-751306" (driver="kvm2")
	I0729 14:38:11.906951 1039440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:11.906977 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:11.907366 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:11.907407 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:11.910366 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910725 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:11.910748 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:11.910913 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:11.911162 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:11.911323 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:11.911529 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:11.992133 1039440 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:11.996426 1039440 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:11.996456 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:11.996544 1039440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:11.996641 1039440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:11.996747 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:12.006165 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:12.029591 1039440 start.go:296] duration metric: took 122.613174ms for postStartSetup
	I0729 14:38:12.029643 1039440 fix.go:56] duration metric: took 18.376148792s for fixHost
	I0729 14:38:12.029670 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.032299 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032667 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.032731 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.032901 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.033104 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033260 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.033372 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.033510 1039440 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:12.033679 1039440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.233 22 <nil> <nil>}
	I0729 14:38:12.033688 1039440 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:12.128889 1039440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263892.107886376
	
	I0729 14:38:12.128917 1039440 fix.go:216] guest clock: 1722263892.107886376
	I0729 14:38:12.128926 1039440 fix.go:229] Guest: 2024-07-29 14:38:12.107886376 +0000 UTC Remote: 2024-07-29 14:38:12.029648961 +0000 UTC m=+239.632909800 (delta=78.237415ms)
	I0729 14:38:12.128955 1039440 fix.go:200] guest clock delta is within tolerance: 78.237415ms
	I0729 14:38:12.128961 1039440 start.go:83] releasing machines lock for "default-k8s-diff-port-751306", held for 18.475504041s
	I0729 14:38:12.128995 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.129301 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:12.132025 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132374 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.132401 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.132566 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133087 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133273 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:38:12.133349 1039440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:12.133404 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.133513 1039440 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:12.133534 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:38:12.136121 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136149 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136523 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136577 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:12.136607 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136624 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:12.136716 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136793 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:38:12.136917 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.136973 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:38:12.137088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137165 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:38:12.137292 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.137232 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:38:12.233842 1039440 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:12.240082 1039440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:12.388404 1039440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:12.395038 1039440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:12.395127 1039440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:12.416590 1039440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:12.416618 1039440 start.go:495] detecting cgroup driver to use...
	I0729 14:38:12.416682 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:12.437863 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:12.453458 1039440 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:12.453508 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:12.467657 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:12.482328 1039440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:12.610786 1039440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:12.774787 1039440 docker.go:233] disabling docker service ...
	I0729 14:38:12.774861 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:12.790091 1039440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:12.803914 1039440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:12.933894 1039440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:13.052159 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:13.069309 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:13.089959 1039440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 14:38:13.090014 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.102668 1039440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:13.102741 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.113634 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.124374 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.135488 1039440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:13.147171 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.159757 1039440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.178620 1039440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:13.189326 1039440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:13.200007 1039440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:13.200067 1039440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:13.213063 1039440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:13.226044 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:13.360685 1039440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:13.508473 1039440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:13.508556 1039440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:13.513547 1039440 start.go:563] Will wait 60s for crictl version
	I0729 14:38:13.513619 1039440 ssh_runner.go:195] Run: which crictl
	I0729 14:38:13.518528 1039440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:13.567103 1039440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:13.567180 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.603837 1039440 ssh_runner.go:195] Run: crio --version
	I0729 14:38:13.633583 1039440 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 14:38:12.153214 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .Start
	I0729 14:38:12.153408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring networks are active...
	I0729 14:38:12.154141 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network default is active
	I0729 14:38:12.154590 1039759 main.go:141] libmachine: (old-k8s-version-360866) Ensuring network mk-old-k8s-version-360866 is active
	I0729 14:38:12.154970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Getting domain xml...
	I0729 14:38:12.155733 1039759 main.go:141] libmachine: (old-k8s-version-360866) Creating domain...
	I0729 14:38:12.526504 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting to get IP...
	I0729 14:38:12.527560 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.528068 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.528147 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.528048 1040622 retry.go:31] will retry after 240.079974ms: waiting for machine to come up
	I0729 14:38:12.769388 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:12.769881 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:12.769910 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:12.769829 1040622 retry.go:31] will retry after 271.200632ms: waiting for machine to come up
	I0729 14:38:13.042584 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.043069 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.043101 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.043049 1040622 retry.go:31] will retry after 464.725959ms: waiting for machine to come up
	I0729 14:38:13.509830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.510400 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.510434 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.510350 1040622 retry.go:31] will retry after 416.316047ms: waiting for machine to come up
	I0729 14:38:13.042877 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:15.051282 1039263 node_ready.go:53] node "embed-certs-668123" has status "Ready":"False"
	I0729 14:38:13.635092 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetIP
	I0729 14:38:13.638202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638665 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:38:13.638691 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:38:13.638933 1039440 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:13.642960 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
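The /etc/hosts update above is an idempotent replace-then-append: strip any existing host.minikube.internal entry, append the fresh mapping, then copy the temp file back with sudo. A generic sketch of the same pattern (example.internal and 192.0.2.10 are placeholders, not values from this run):

	{ grep -v $'\texample.internal$' /etc/hosts; echo "192.0.2.10	example.internal"; } > /tmp/hosts.$$
	sudo cp "/tmp/hosts.$$" /etc/hosts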
	I0729 14:38:13.656098 1039440 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:13.656208 1039440 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 14:38:13.656255 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:13.697398 1039440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 14:38:13.697475 1039440 ssh_runner.go:195] Run: which lz4
	I0729 14:38:13.701632 1039440 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:38:13.707129 1039440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:13.707162 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 14:38:15.218414 1039440 crio.go:462] duration metric: took 1.516807674s to copy over tarball
	I0729 14:38:15.218505 1039440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:13.927885 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:13.928343 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:13.928373 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:13.928307 1040622 retry.go:31] will retry after 659.670364ms: waiting for machine to come up
	I0729 14:38:14.589644 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:14.590143 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:14.590172 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:14.590031 1040622 retry.go:31] will retry after 738.020335ms: waiting for machine to come up
	I0729 14:38:15.330093 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:15.330603 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:15.330633 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:15.330553 1040622 retry.go:31] will retry after 1.13067902s: waiting for machine to come up
	I0729 14:38:16.462554 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:16.463002 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:16.463031 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:16.462977 1040622 retry.go:31] will retry after 1.342785853s: waiting for machine to come up
	I0729 14:38:17.806889 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:17.807333 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:17.807365 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:17.807266 1040622 retry.go:31] will retry after 1.804812934s: waiting for machine to come up
	I0729 14:38:16.550848 1039263 node_ready.go:49] node "embed-certs-668123" has status "Ready":"True"
	I0729 14:38:16.550880 1039263 node_ready.go:38] duration metric: took 7.512808712s for node "embed-certs-668123" to be "Ready" ...
	I0729 14:38:16.550895 1039263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:16.563220 1039263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570054 1039263 pod_ready.go:92] pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:16.570080 1039263 pod_ready.go:81] duration metric: took 6.832939ms for pod "coredns-7db6d8ff4d-6dhzz" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:16.570091 1039263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:19.207981 1039263 pod_ready.go:102] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:17.498961 1039440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.280415291s)
	I0729 14:38:17.498997 1039440 crio.go:469] duration metric: took 2.280548689s to extract the tarball
	I0729 14:38:17.499008 1039440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:17.537972 1039440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:17.583582 1039440 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 14:38:17.583609 1039440 cache_images.go:84] Images are preloaded, skipping loading
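Once the preload tarball is unpacked under /var, the same "sudo crictl images --output json" probe flips from "assuming images are not preloaded" to "all images are preloaded". A manual spot check, using the image named in the earlier warning:

	sudo crictl images | grep kube-apiserver    # expect registry.k8s.io/kube-apiserver tagged v1.30.3 after extraction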
	I0729 14:38:17.583617 1039440 kubeadm.go:934] updating node { 192.168.72.233 8444 v1.30.3 crio true true} ...
	I0729 14:38:17.583719 1039440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:17.583789 1039440 ssh_runner.go:195] Run: crio config
	I0729 14:38:17.637202 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:17.637230 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:17.637243 1039440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:17.637272 1039440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.233 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751306 NodeName:default-k8s-diff-port-751306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:38:17.637451 1039440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751306"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:17.637528 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 14:38:17.650173 1039440 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:17.650259 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:17.661790 1039440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 14:38:17.680720 1039440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:17.700420 1039440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 14:38:17.723134 1039440 ssh_runner.go:195] Run: grep 192.168.72.233	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:17.727510 1039440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:17.741033 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:17.889833 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:17.910486 1039440 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306 for IP: 192.168.72.233
	I0729 14:38:17.910540 1039440 certs.go:194] generating shared ca certs ...
	I0729 14:38:17.910565 1039440 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:17.910763 1039440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:17.910821 1039440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:17.910833 1039440 certs.go:256] generating profile certs ...
	I0729 14:38:17.910941 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/client.key
	I0729 14:38:17.911003 1039440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key.811a3f6d
	I0729 14:38:17.911105 1039440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key
	I0729 14:38:17.911271 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:17.911315 1039440 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:17.911329 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:17.911362 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:17.911393 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:17.911426 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:17.911478 1039440 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:17.912301 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:17.948102 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:17.984122 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:18.019932 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:18.062310 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 14:38:18.093176 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 14:38:18.124016 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:18.151933 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/default-k8s-diff-port-751306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 14:38:18.179714 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:18.203414 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:18.233286 1039440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:18.262871 1039440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:18.283064 1039440 ssh_runner.go:195] Run: openssl version
	I0729 14:38:18.289016 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:18.299409 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304053 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.304115 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:18.309976 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:18.321472 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:18.331916 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336822 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.336881 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:18.342762 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:18.353478 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:18.364299 1039440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369024 1039440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.369076 1039440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:18.376534 1039440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
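The ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: the hash printed by the preceding "openssl x509 -hash -noout" call becomes the symlink name the system trust store looks up. A sketch for the minikubeCA case:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"    # HASH comes out as b5213941 in this run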
	I0729 14:38:18.387360 1039440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:18.392392 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:18.398520 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:18.404397 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:18.410922 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:18.417193 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:18.423808 1039440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:18.433345 1039440 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-751306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:18.433463 1039440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:18.433582 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.476749 1039440 cri.go:89] found id: ""
	I0729 14:38:18.476834 1039440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:18.488548 1039440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:18.488570 1039440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:18.488628 1039440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:18.499081 1039440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:18.500064 1039440 kubeconfig.go:125] found "default-k8s-diff-port-751306" server: "https://192.168.72.233:8444"
	I0729 14:38:18.502130 1039440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:18.511589 1039440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.233
	I0729 14:38:18.511631 1039440 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:18.511646 1039440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:18.511698 1039440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:18.559691 1039440 cri.go:89] found id: ""
	I0729 14:38:18.559779 1039440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:18.576217 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:18.586031 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:18.586057 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:18.586110 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:38:18.595032 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:18.595096 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:18.604320 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:38:18.613996 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:18.614053 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:18.623345 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.631898 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:18.631943 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:18.641303 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:38:18.649849 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:18.649907 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
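Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444; anything else is deleted and regenerated by the kubeadm phases that follow. A compact sketch of the check-and-remove loop performed above (file names and endpoint taken from the log):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done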
	I0729 14:38:18.659657 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:18.668914 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:18.782351 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:19.902413 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.120025721s)
	I0729 14:38:19.902451 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.120455 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.206099 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:20.293738 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:20.293850 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:20.794840 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.294958 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:21.313567 1039440 api_server.go:72] duration metric: took 1.019826572s to wait for apiserver process to appear ...
	I0729 14:38:21.313600 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:38:21.313625 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:21.314152 1039440 api_server.go:269] stopped: https://192.168.72.233:8444/healthz: Get "https://192.168.72.233:8444/healthz": dial tcp 192.168.72.233:8444: connect: connection refused
	I0729 14:38:21.813935 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:19.613474 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:19.613801 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:19.613830 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:19.613749 1040622 retry.go:31] will retry after 1.449593132s: waiting for machine to come up
	I0729 14:38:21.064774 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:21.065382 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:21.065405 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:21.065314 1040622 retry.go:31] will retry after 1.807508073s: waiting for machine to come up
	I0729 14:38:22.874485 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:22.874896 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:22.874925 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:22.874844 1040622 retry.go:31] will retry after 3.036719557s: waiting for machine to come up
	I0729 14:38:21.578125 1039263 pod_ready.go:92] pod "etcd-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.578152 1039263 pod_ready.go:81] duration metric: took 5.008051755s for pod "etcd-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.578164 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584521 1039263 pod_ready.go:92] pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.584544 1039263 pod_ready.go:81] duration metric: took 6.372252ms for pod "kube-apiserver-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.584558 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590245 1039263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.590269 1039263 pod_ready.go:81] duration metric: took 5.702853ms for pod "kube-controller-manager-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.590280 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594576 1039263 pod_ready.go:92] pod "kube-proxy-2v79q" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.594602 1039263 pod_ready.go:81] duration metric: took 4.314692ms for pod "kube-proxy-2v79q" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.594614 1039263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787339 1039263 pod_ready.go:92] pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:21.787379 1039263 pod_ready.go:81] duration metric: took 192.756548ms for pod "kube-scheduler-embed-certs-668123" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:21.787399 1039263 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:23.795588 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:24.561135 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:38:24.561176 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:38:24.561195 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.635519 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.635550 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:24.813755 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:24.817972 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:24.818003 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.314643 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.320059 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.320094 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:25.814758 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:25.820578 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:25.820613 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.314798 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.319346 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.319384 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:26.813907 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:26.821176 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:26.821208 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.314614 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.319335 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:38:27.319361 1039440 api_server.go:103] status: https://192.168.72.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:38:27.814188 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:38:27.819010 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:38:27.826057 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:38:27.826082 1039440 api_server.go:131] duration metric: took 6.512474877s to wait for apiserver health ...
	I0729 14:38:27.826091 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:38:27.826098 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:27.827698 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
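
The api_server.go lines above show minikube polling the apiserver's /healthz endpoint roughly every half second until the failing apiservice-discovery-controller poststarthook clears and the endpoint returns 200. As a minimal, hypothetical sketch of that polling pattern (the endpoint, interval, and insecure TLS client are illustrative assumptions, not minikube's actual implementation):

// healthzwait.go - sketch of a /healthz polling loop like the one logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is self-signed here, so skip verification in this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// A 500 body lists each failed poststarthook, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.233:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
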
	I0729 14:38:25.913642 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:25.914139 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | unable to find current IP address of domain old-k8s-version-360866 in network mk-old-k8s-version-360866
	I0729 14:38:25.914166 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | I0729 14:38:25.914099 1040622 retry.go:31] will retry after 3.839238383s: waiting for machine to come up
	I0729 14:38:26.293618 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:28.294115 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:30.296010 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.361688 1038758 start.go:364] duration metric: took 52.182622805s to acquireMachinesLock for "no-preload-603534"
	I0729 14:38:31.361756 1038758 start.go:96] Skipping create...Using existing machine configuration
	I0729 14:38:31.361765 1038758 fix.go:54] fixHost starting: 
	I0729 14:38:31.362279 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:38:31.362319 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:38:31.380259 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0729 14:38:31.380783 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:38:31.381320 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:38:31.381349 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:38:31.381649 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:38:31.381848 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:31.381989 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:38:31.383537 1038758 fix.go:112] recreateIfNeeded on no-preload-603534: state=Stopped err=<nil>
	I0729 14:38:31.383561 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	W0729 14:38:31.383739 1038758 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 14:38:31.385496 1038758 out.go:177] * Restarting existing kvm2 VM for "no-preload-603534" ...
	I0729 14:38:31.386878 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Start
	I0729 14:38:31.387071 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring networks are active...
	I0729 14:38:31.387821 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network default is active
	I0729 14:38:31.388141 1038758 main.go:141] libmachine: (no-preload-603534) Ensuring network mk-no-preload-603534 is active
	I0729 14:38:31.388649 1038758 main.go:141] libmachine: (no-preload-603534) Getting domain xml...
	I0729 14:38:31.391807 1038758 main.go:141] libmachine: (no-preload-603534) Creating domain...
	I0729 14:38:27.829109 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:38:27.839810 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:38:27.858724 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:38:27.868075 1039440 system_pods.go:59] 8 kube-system pods found
	I0729 14:38:27.868112 1039440 system_pods.go:61] "coredns-7db6d8ff4d-m6dlw" [7ce45b48-f04d-4167-8a6e-643b2fb3c4f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:38:27.868121 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [7ccadfd7-8b68-45c0-9670-af97b90d35d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:38:27.868128 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [5e8c8e17-28db-499c-a940-e67d92b28bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:38:27.868134 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [a2d31d58-d8d9-4070-96af-0d1af763d0b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:38:27.868140 1039440 system_pods.go:61] "kube-proxy-p6dv5" [c44edf0a-f608-49f2-ab53-7ffbcdf13b5e] Running
	I0729 14:38:27.868146 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [b87ee044-f43f-4aa7-94b3-4f2ad2213ce9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:38:27.868152 1039440 system_pods.go:61] "metrics-server-569cc877fc-gmz64" [296e883c-7394-4004-a25f-e93b4be52d46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:38:27.868156 1039440 system_pods.go:61] "storage-provisioner" [ec3b78f1-96a3-47b2-958d-82258a074634] Running
	I0729 14:38:27.868165 1039440 system_pods.go:74] duration metric: took 9.405484ms to wait for pod list to return data ...
	I0729 14:38:27.868173 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:38:27.871538 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:38:27.871563 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:38:27.871575 1039440 node_conditions.go:105] duration metric: took 3.397306ms to run NodePressure ...
	I0729 14:38:27.871596 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:28.143890 1039440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148855 1039440 kubeadm.go:739] kubelet initialised
	I0729 14:38:28.148880 1039440 kubeadm.go:740] duration metric: took 4.952057ms waiting for restarted kubelet to initialise ...
	I0729 14:38:28.148891 1039440 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:38:28.154636 1039440 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:30.161265 1039440 pod_ready.go:102] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.161979 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:31.162005 1039440 pod_ready.go:81] duration metric: took 3.007344998s for pod "coredns-7db6d8ff4d-m6dlw" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:31.162015 1039440 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
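
The pod_ready.go lines above wait up to 4m0s for each system-critical pod to report the Ready condition. A hedged sketch of the same idea with client-go follows; the kubeconfig path and pod name are placeholders, and this is not minikube's pod_ready.go, just the condition check it describes:

// podready.go - poll a kube-system pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-m6dlw", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
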
	I0729 14:38:29.755060 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755512 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has current primary IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.755524 1039759 main.go:141] libmachine: (old-k8s-version-360866) Found IP for machine: 192.168.39.71
	I0729 14:38:29.755536 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserving static IP address...
	I0729 14:38:29.755975 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.756008 1039759 main.go:141] libmachine: (old-k8s-version-360866) Reserved static IP address: 192.168.39.71
	I0729 14:38:29.756035 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | skip adding static IP to network mk-old-k8s-version-360866 - found existing host DHCP lease matching {name: "old-k8s-version-360866", mac: "52:54:00:18:de:25", ip: "192.168.39.71"}
	I0729 14:38:29.756048 1039759 main.go:141] libmachine: (old-k8s-version-360866) Waiting for SSH to be available...
	I0729 14:38:29.756067 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Getting to WaitForSSH function...
	I0729 14:38:29.758527 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.758899 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.758944 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.759003 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH client type: external
	I0729 14:38:29.759024 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa (-rw-------)
	I0729 14:38:29.759058 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:29.759070 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | About to run SSH command:
	I0729 14:38:29.759083 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | exit 0
	I0729 14:38:29.884425 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:29.884833 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetConfigRaw
	I0729 14:38:29.885450 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:29.887929 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888241 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.888294 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.888624 1039759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/config.json ...
	I0729 14:38:29.888895 1039759 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:29.888919 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:29.889221 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.891654 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892013 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.892038 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.892163 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.892350 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892598 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.892764 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.892968 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.893158 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.893169 1039759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:29.993529 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:29.993564 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.993859 1039759 buildroot.go:166] provisioning hostname "old-k8s-version-360866"
	I0729 14:38:29.993893 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:29.994074 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:29.996882 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997279 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:29.997308 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:29.997537 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:29.997699 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997856 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:29.997976 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:29.998206 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:29.998412 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:29.998429 1039759 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-360866 && echo "old-k8s-version-360866" | sudo tee /etc/hostname
	I0729 14:38:30.115298 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-360866
	
	I0729 14:38:30.115331 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.118349 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.118763 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.118793 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.119029 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.119203 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119356 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.119561 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.119772 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.119976 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.120019 1039759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-360866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-360866/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-360866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:30.229987 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
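
The provisioning steps above (hostname, /etc/hosts) are shell snippets executed over SSH with the machine's private key and StrictHostKeyChecking disabled. A minimal sketch of running such a command with golang.org/x/crypto/ssh follows; the address, key path, and command are illustrative, and libmachine's own SSH plumbing differs in detail:

// sshrun.go - run one provisioning command over SSH with a private key.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.71:22", "docker",
		"/path/to/id_rsa", // placeholder key path
		`sudo hostname old-k8s-version-360866 && hostname`)
	fmt.Println(out, err)
}
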
	I0729 14:38:30.230017 1039759 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:30.230059 1039759 buildroot.go:174] setting up certificates
	I0729 14:38:30.230070 1039759 provision.go:84] configureAuth start
	I0729 14:38:30.230090 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetMachineName
	I0729 14:38:30.230436 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:30.233150 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233501 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.233533 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.233719 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.236157 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236494 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.236534 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.236713 1039759 provision.go:143] copyHostCerts
	I0729 14:38:30.236786 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:30.236797 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:30.236856 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:30.236976 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:30.236986 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:30.237006 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:30.237071 1039759 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:30.237078 1039759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:30.237095 1039759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:30.237153 1039759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-360866 san=[127.0.0.1 192.168.39.71 localhost minikube old-k8s-version-360866]
	I0729 14:38:30.680859 1039759 provision.go:177] copyRemoteCerts
	I0729 14:38:30.680933 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:30.680970 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.683890 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684229 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.684262 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.684430 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.684634 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.684822 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.684973 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:30.770659 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:30.799011 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 14:38:30.825536 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:30.850751 1039759 provision.go:87] duration metric: took 620.664228ms to configureAuth
	I0729 14:38:30.850795 1039759 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:30.850998 1039759 config.go:182] Loaded profile config "old-k8s-version-360866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 14:38:30.851072 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:30.853735 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854065 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:30.854102 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:30.854197 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:30.854408 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854559 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:30.854717 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:30.854961 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:30.855169 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:30.855187 1039759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:31.119354 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:31.119386 1039759 machine.go:97] duration metric: took 1.230472142s to provisionDockerMachine
	I0729 14:38:31.119401 1039759 start.go:293] postStartSetup for "old-k8s-version-360866" (driver="kvm2")
	I0729 14:38:31.119415 1039759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:31.119456 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.119885 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:31.119926 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.123196 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123576 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.123607 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.123826 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.124053 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.124276 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.124469 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.208607 1039759 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:31.213173 1039759 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:31.213206 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:31.213268 1039759 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:31.213352 1039759 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:31.213454 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:31.225256 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:31.253156 1039759 start.go:296] duration metric: took 133.735669ms for postStartSetup
	I0729 14:38:31.253208 1039759 fix.go:56] duration metric: took 19.124042428s for fixHost
	I0729 14:38:31.253237 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.256005 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256340 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.256375 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.256535 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.256732 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.256927 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.257075 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.257272 1039759 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:31.257445 1039759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0729 14:38:31.257455 1039759 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:31.361488 1039759 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263911.340365932
	
	I0729 14:38:31.361512 1039759 fix.go:216] guest clock: 1722263911.340365932
	I0729 14:38:31.361519 1039759 fix.go:229] Guest: 2024-07-29 14:38:31.340365932 +0000 UTC Remote: 2024-07-29 14:38:31.253213714 +0000 UTC m=+217.413183116 (delta=87.152218ms)
	I0729 14:38:31.361572 1039759 fix.go:200] guest clock delta is within tolerance: 87.152218ms
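
The fix.go lines above compare the guest clock against the host clock and accept the 87ms difference as within tolerance. A small sketch of that check, with a one-second tolerance assumed purely for illustration:

// clockdelta.go - compare guest vs. host time and test it against a tolerance.
package main

import (
	"fmt"
	"time"
)

func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1722263911340365932) // guest clock value from the log
	host := time.Now()
	delta, ok := clockWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%s within tolerance: %v\n", delta, ok)
}
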
	I0729 14:38:31.361583 1039759 start.go:83] releasing machines lock for "old-k8s-version-360866", held for 19.232453759s
	I0729 14:38:31.361611 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.361921 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:31.364981 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365412 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.365441 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.365648 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366227 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366482 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .DriverName
	I0729 14:38:31.366583 1039759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:31.366644 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.366761 1039759 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:31.366797 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHHostname
	I0729 14:38:31.369658 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.369699 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370051 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370081 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:31.370105 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370125 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:31.370309 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370325 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHPort
	I0729 14:38:31.370567 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370568 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHKeyPath
	I0729 14:38:31.370773 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370809 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetSSHUsername
	I0729 14:38:31.370958 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.370957 1039759 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/old-k8s-version-360866/id_rsa Username:docker}
	I0729 14:38:31.472108 1039759 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:31.478939 1039759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:31.630720 1039759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:31.637768 1039759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:31.637874 1039759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:31.655476 1039759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 14:38:31.655504 1039759 start.go:495] detecting cgroup driver to use...
	I0729 14:38:31.655584 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:31.679387 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:31.704260 1039759 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:31.704318 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:31.727875 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:31.743197 1039759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:31.867502 1039759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:32.035088 1039759 docker.go:233] disabling docker service ...
	I0729 14:38:32.035169 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:32.050118 1039759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:32.064828 1039759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:32.202938 1039759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:32.333330 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:32.348845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:32.369848 1039759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 14:38:32.369923 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.381787 1039759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:32.381893 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.394331 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.405323 1039759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:32.417259 1039759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:32.428997 1039759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:32.440934 1039759 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:32.441003 1039759 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:32.454949 1039759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:32.466042 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:32.596308 1039759 ssh_runner.go:195] Run: sudo systemctl restart crio
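
The crio.go steps above rewrite the pause_image and cgroup_manager keys in /etc/crio/crio.conf.d/02-crio.conf with sed and then restart CRI-O. A hedged sketch of the same rewrite done in-process follows; the path and values mirror the log, while doing it in Go instead of sed is purely illustrative:

// crioconf.go - rewrite two keys in the CRI-O drop-in config, like the sed one-liners above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func rewriteKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
	conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		fmt.Println(err)
	}
	// After rewriting, the runtime would be reloaded, e.g. via `systemctl restart crio`.
}
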
	I0729 14:38:32.762548 1039759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:32.762632 1039759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:32.768336 1039759 start.go:563] Will wait 60s for crictl version
	I0729 14:38:32.768447 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:32.772850 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:32.829827 1039759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:32.829936 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.863269 1039759 ssh_runner.go:195] Run: crio --version
	I0729 14:38:32.897768 1039759 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 14:38:32.899209 1039759 main.go:141] libmachine: (old-k8s-version-360866) Calling .GetIP
	I0729 14:38:32.902257 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902649 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:de:25", ip: ""} in network mk-old-k8s-version-360866: {Iface:virbr3 ExpiryTime:2024-07-29 15:38:23 +0000 UTC Type:0 Mac:52:54:00:18:de:25 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-360866 Clientid:01:52:54:00:18:de:25}
	I0729 14:38:32.902680 1039759 main.go:141] libmachine: (old-k8s-version-360866) DBG | domain old-k8s-version-360866 has defined IP address 192.168.39.71 and MAC address 52:54:00:18:de:25 in network mk-old-k8s-version-360866
	I0729 14:38:32.902928 1039759 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:32.908590 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:32.921952 1039759 kubeadm.go:883] updating cluster {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:32.922094 1039759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 14:38:32.922141 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:32.969932 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:32.970003 1039759 ssh_runner.go:195] Run: which lz4
	I0729 14:38:32.974564 1039759 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 14:38:32.980128 1039759 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 14:38:32.980173 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 14:38:32.795590 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.295541 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:31.750580 1038758 main.go:141] libmachine: (no-preload-603534) Waiting to get IP...
	I0729 14:38:31.751732 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.752236 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.752340 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.752236 1040763 retry.go:31] will retry after 239.008836ms: waiting for machine to come up
	I0729 14:38:31.993011 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:31.993538 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:31.993569 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:31.993481 1040763 retry.go:31] will retry after 288.863538ms: waiting for machine to come up
	I0729 14:38:32.284306 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.284941 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.284980 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.284867 1040763 retry.go:31] will retry after 410.903425ms: waiting for machine to come up
	I0729 14:38:32.697686 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:32.698291 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:32.698322 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:32.698227 1040763 retry.go:31] will retry after 423.090324ms: waiting for machine to come up
	I0729 14:38:33.122914 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.123550 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.123579 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.123500 1040763 retry.go:31] will retry after 744.030348ms: waiting for machine to come up
	I0729 14:38:33.869849 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:33.870499 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:33.870523 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:33.870456 1040763 retry.go:31] will retry after 888.516658ms: waiting for machine to come up
	I0729 14:38:34.760145 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:34.760594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:34.760627 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:34.760534 1040763 retry.go:31] will retry after 889.371631ms: waiting for machine to come up
	I0729 14:38:35.651169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:35.651700 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:35.651731 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:35.651636 1040763 retry.go:31] will retry after 1.200333492s: waiting for machine to come up
	I0729 14:38:33.181695 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:35.672201 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:34.707140 1039759 crio.go:462] duration metric: took 1.732619622s to copy over tarball
	I0729 14:38:34.707232 1039759 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 14:38:37.740076 1039759 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.032804006s)
	I0729 14:38:37.740105 1039759 crio.go:469] duration metric: took 3.032930405s to extract the tarball
	I0729 14:38:37.740113 1039759 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 14:38:37.786934 1039759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:37.827451 1039759 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 14:38:37.827484 1039759 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:37.827576 1039759 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:37.827606 1039759 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.827624 1039759 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.827702 1039759 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.827607 1039759 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.827683 1039759 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.827678 1039759 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829621 1039759 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:37.829709 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:37.829714 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:37.829724 1039759 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:37.829628 1039759 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:37.829808 1039759 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 14:38:37.829625 1039759 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.113249 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:38.373433 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.378382 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.380909 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.382431 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.391678 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.392565 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.419739 1039759 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 14:38:38.491174 1039759 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 14:38:38.491255 1039759 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.491320 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570681 1039759 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 14:38:38.570784 1039759 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 14:38:38.570832 1039759 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.570889 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570792 1039759 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.570721 1039759 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 14:38:38.570966 1039759 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.570977 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.570992 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.576687 1039759 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 14:38:38.576728 1039759 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.576769 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587650 1039759 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 14:38:38.587699 1039759 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.587701 1039759 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 14:38:38.587738 1039759 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 14:38:38.587753 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587791 1039759 ssh_runner.go:195] Run: which crictl
	I0729 14:38:38.587866 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 14:38:38.587883 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 14:38:38.587913 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 14:38:38.587948 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 14:38:38.591209 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 14:38:38.599567 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 14:38:38.610869 1039759 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 14:38:38.742939 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 14:38:38.742974 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 14:38:38.743091 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 14:38:38.743098 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 14:38:38.745789 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 14:38:38.745857 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 14:38:38.753643 1039759 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 14:38:38.753704 1039759 cache_images.go:92] duration metric: took 926.203812ms to LoadCachedImages
	W0729 14:38:38.753790 1039759 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 14:38:38.753804 1039759 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.20.0 crio true true} ...
	I0729 14:38:38.753931 1039759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-360866 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:38:38.753992 1039759 ssh_runner.go:195] Run: crio config
	I0729 14:38:38.802220 1039759 cni.go:84] Creating CNI manager for ""
	I0729 14:38:38.802246 1039759 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:38:38.802258 1039759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:38:38.802285 1039759 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-360866 NodeName:old-k8s-version-360866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 14:38:38.802487 1039759 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-360866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:38:38.802591 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 14:38:38.816832 1039759 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:38:38.816934 1039759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:38:38.827468 1039759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 14:38:38.847125 1039759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 14:38:38.865302 1039759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 14:38:37.795799 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:40.294979 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:36.853388 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:36.853944 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:36.853979 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:36.853881 1040763 retry.go:31] will retry after 1.750535475s: waiting for machine to come up
	I0729 14:38:38.605644 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:38.606135 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:38.606185 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:38.606079 1040763 retry.go:31] will retry after 2.245294623s: waiting for machine to come up
	I0729 14:38:40.853761 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:40.854277 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:40.854311 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:40.854214 1040763 retry.go:31] will retry after 1.864975071s: waiting for machine to come up
	I0729 14:38:38.299326 1039440 pod_ready.go:102] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:39.170692 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.170720 1039440 pod_ready.go:81] duration metric: took 8.008696752s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.170735 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177419 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:39.177449 1039440 pod_ready.go:81] duration metric: took 6.705958ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:39.177463 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185538 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.185566 1039440 pod_ready.go:81] duration metric: took 2.008093791s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.185580 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193833 1039440 pod_ready.go:92] pod "kube-proxy-p6dv5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.193864 1039440 pod_ready.go:81] duration metric: took 8.275486ms for pod "kube-proxy-p6dv5" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.193878 1039440 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200931 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:38:41.200963 1039440 pod_ready.go:81] duration metric: took 7.075212ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:41.200978 1039440 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	I0729 14:38:38.884267 1039759 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0729 14:38:38.889206 1039759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:38.905643 1039759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:39.032065 1039759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:38:39.051892 1039759 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866 for IP: 192.168.39.71
	I0729 14:38:39.051991 1039759 certs.go:194] generating shared ca certs ...
	I0729 14:38:39.052019 1039759 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.052203 1039759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:38:39.052258 1039759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:38:39.052270 1039759 certs.go:256] generating profile certs ...
	I0729 14:38:39.091359 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/client.key
	I0729 14:38:39.091485 1039759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key.98c2aed0
	I0729 14:38:39.091554 1039759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key
	I0729 14:38:39.091718 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:38:39.091763 1039759 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:38:39.091776 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:38:39.091804 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:38:39.091835 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:38:39.091867 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:38:39.091924 1039759 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:39.092850 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:38:39.125528 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:38:39.153093 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:38:39.181324 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:38:39.235516 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 14:38:39.262599 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:38:39.293085 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:38:39.326318 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/old-k8s-version-360866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:38:39.361548 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:38:39.386876 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:38:39.412529 1039759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:38:39.438418 1039759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:38:39.459519 1039759 ssh_runner.go:195] Run: openssl version
	I0729 14:38:39.466109 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:38:39.477941 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482748 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.482820 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:38:39.489099 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:38:39.500207 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:38:39.511513 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516125 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.516183 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:38:39.522297 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:38:39.533536 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:38:39.544996 1039759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549681 1039759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.549733 1039759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:38:39.556332 1039759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:38:39.571393 1039759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:38:39.578420 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:38:39.586316 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:38:39.593450 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:38:39.600604 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:38:39.607483 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:38:39.614692 1039759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 14:38:39.621776 1039759 kubeadm.go:392] StartCluster: {Name:old-k8s-version-360866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:38:39.621893 1039759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:38:39.621955 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.673544 1039759 cri.go:89] found id: ""
	I0729 14:38:39.673634 1039759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:38:39.687887 1039759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:38:39.687912 1039759 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:38:39.687963 1039759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:38:39.701616 1039759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:38:39.702914 1039759 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-360866" does not appear in /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:38:39.703576 1039759 kubeconfig.go:62] /home/jenkins/minikube-integration/19338-974764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-360866" cluster setting kubeconfig missing "old-k8s-version-360866" context setting]
	I0729 14:38:39.704951 1039759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:38:39.715056 1039759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:38:39.728384 1039759 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.71
	I0729 14:38:39.728448 1039759 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:38:39.728466 1039759 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:38:39.728534 1039759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:38:39.778476 1039759 cri.go:89] found id: ""
	I0729 14:38:39.778561 1039759 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:38:39.800712 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:38:39.813243 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:38:39.813265 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:38:39.813323 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:38:39.824822 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:38:39.824897 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:38:39.834966 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:38:39.847660 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:38:39.847887 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:38:39.861117 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.873868 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:38:39.873936 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:38:39.884195 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:38:39.895155 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:38:39.895234 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:38:39.909138 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:38:39.920721 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:40.055932 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.173909 1039759 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.117933178s)
	I0729 14:38:41.173947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.419684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.550852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:38:41.655941 1039759 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:38:41.656040 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.156080 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.656948 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:43.656087 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:42.794217 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.293634 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:42.720182 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:42.720674 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:42.720701 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:42.720614 1040763 retry.go:31] will retry after 2.929394717s: waiting for machine to come up
	I0729 14:38:45.653508 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:45.654044 1038758 main.go:141] libmachine: (no-preload-603534) DBG | unable to find current IP address of domain no-preload-603534 in network mk-no-preload-603534
	I0729 14:38:45.654069 1038758 main.go:141] libmachine: (no-preload-603534) DBG | I0729 14:38:45.653993 1040763 retry.go:31] will retry after 4.133064498s: waiting for machine to come up
	I0729 14:38:43.208287 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:45.706607 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:44.156583 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:44.657199 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.156268 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:45.656786 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.156393 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:46.656151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.156507 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.656922 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.156840 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:48.656756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:47.294322 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.795189 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.789721 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790248 1038758 main.go:141] libmachine: (no-preload-603534) Found IP for machine: 192.168.61.116
	I0729 14:38:49.790272 1038758 main.go:141] libmachine: (no-preload-603534) Reserving static IP address...
	I0729 14:38:49.790290 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has current primary IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.790823 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.790860 1038758 main.go:141] libmachine: (no-preload-603534) Reserved static IP address: 192.168.61.116
	I0729 14:38:49.790883 1038758 main.go:141] libmachine: (no-preload-603534) DBG | skip adding static IP to network mk-no-preload-603534 - found existing host DHCP lease matching {name: "no-preload-603534", mac: "52:54:00:bf:94:45", ip: "192.168.61.116"}
	I0729 14:38:49.790920 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Getting to WaitForSSH function...
	I0729 14:38:49.790937 1038758 main.go:141] libmachine: (no-preload-603534) Waiting for SSH to be available...
	I0729 14:38:49.793243 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793646 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.793679 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.793820 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH client type: external
	I0729 14:38:49.793850 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa (-rw-------)
	I0729 14:38:49.793884 1038758 main.go:141] libmachine: (no-preload-603534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 14:38:49.793899 1038758 main.go:141] libmachine: (no-preload-603534) DBG | About to run SSH command:
	I0729 14:38:49.793961 1038758 main.go:141] libmachine: (no-preload-603534) DBG | exit 0
	I0729 14:38:49.924827 1038758 main.go:141] libmachine: (no-preload-603534) DBG | SSH cmd err, output: <nil>: 
	I0729 14:38:49.925188 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetConfigRaw
	I0729 14:38:49.925835 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:49.928349 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.928799 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.928830 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.929091 1038758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/config.json ...
	I0729 14:38:49.929313 1038758 machine.go:94] provisionDockerMachine start ...
	I0729 14:38:49.929334 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:49.929556 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:49.932040 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932431 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:49.932473 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:49.932629 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:49.932798 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.932930 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:49.933033 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:49.933142 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:49.933313 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:49.933324 1038758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 14:38:50.049016 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 14:38:50.049059 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049328 1038758 buildroot.go:166] provisioning hostname "no-preload-603534"
	I0729 14:38:50.049354 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.049566 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.052138 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052532 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.052561 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.052736 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.052918 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053093 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.053269 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.053462 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.053641 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.053653 1038758 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-603534 && echo "no-preload-603534" | sudo tee /etc/hostname
	I0729 14:38:50.189302 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-603534
	
	I0729 14:38:50.189341 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.192559 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.192949 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.192974 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.193248 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.193476 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193689 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.193870 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.194082 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.194305 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.194329 1038758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603534/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 14:38:50.322506 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 14:38:50.322540 1038758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-974764/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-974764/.minikube}
	I0729 14:38:50.322564 1038758 buildroot.go:174] setting up certificates
	I0729 14:38:50.322577 1038758 provision.go:84] configureAuth start
	I0729 14:38:50.322589 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetMachineName
	I0729 14:38:50.322938 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:50.325594 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.325957 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.325994 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.326139 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.328455 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328803 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.328828 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.328950 1038758 provision.go:143] copyHostCerts
	I0729 14:38:50.329015 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem, removing ...
	I0729 14:38:50.329025 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem
	I0729 14:38:50.329078 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/ca.pem (1078 bytes)
	I0729 14:38:50.329165 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem, removing ...
	I0729 14:38:50.329173 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem
	I0729 14:38:50.329192 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/cert.pem (1123 bytes)
	I0729 14:38:50.329243 1038758 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem, removing ...
	I0729 14:38:50.329249 1038758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem
	I0729 14:38:50.329264 1038758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-974764/.minikube/key.pem (1675 bytes)
	I0729 14:38:50.329310 1038758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem org=jenkins.no-preload-603534 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-603534]
	I0729 14:38:50.447706 1038758 provision.go:177] copyRemoteCerts
	I0729 14:38:50.447777 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 14:38:50.447810 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.450714 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451106 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.451125 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.451444 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.451679 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.451855 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.451975 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.539025 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 14:38:50.567887 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 14:38:50.594581 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 14:38:50.619475 1038758 provision.go:87] duration metric: took 296.880769ms to configureAuth
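
The provisioning step above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.61.116, localhost, minikube, no-preload-603534) and then copies it into the guest. A rough, self-contained sketch of producing a certificate with equivalent SANs using Go's standard library follows; it is self-signed for brevity and is not minikube's certs.go, which signs the server cert with the shared minikube CA.

```go
// Illustrative only: generate a self-signed server certificate with the same
// SANs that appear in the log. Not minikube's certs.go implementation.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-603534"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-603534"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.116")},
	}
	// Self-signed for brevity; minikube actually signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```
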
	I0729 14:38:50.619509 1038758 buildroot.go:189] setting minikube options for container-runtime
	I0729 14:38:50.619708 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:38:50.619797 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.622753 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623121 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.623151 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.623331 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.623519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623684 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.623813 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.623971 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:50.624151 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:50.624168 1038758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 14:38:50.895618 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 14:38:50.895649 1038758 machine.go:97] duration metric: took 966.320375ms to provisionDockerMachine
	I0729 14:38:50.895662 1038758 start.go:293] postStartSetup for "no-preload-603534" (driver="kvm2")
	I0729 14:38:50.895684 1038758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 14:38:50.895717 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:50.896084 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 14:38:50.896112 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:50.899586 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.899998 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:50.900031 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:50.900168 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:50.900424 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:50.900622 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:50.900799 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:50.987195 1038758 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 14:38:50.991924 1038758 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 14:38:50.991952 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/addons for local assets ...
	I0729 14:38:50.992025 1038758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-974764/.minikube/files for local assets ...
	I0729 14:38:50.992111 1038758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem -> 9820462.pem in /etc/ssl/certs
	I0729 14:38:50.992208 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 14:38:51.002048 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:38:51.029714 1038758 start.go:296] duration metric: took 134.037621ms for postStartSetup
	I0729 14:38:51.029758 1038758 fix.go:56] duration metric: took 19.66799406s for fixHost
	I0729 14:38:51.029782 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.032495 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.032819 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.032844 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.033049 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.033236 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033377 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.033587 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.033767 1038758 main.go:141] libmachine: Using SSH client type: native
	I0729 14:38:51.034007 1038758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0729 14:38:51.034021 1038758 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 14:38:51.149481 1038758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722263931.130931233
	
	I0729 14:38:51.149510 1038758 fix.go:216] guest clock: 1722263931.130931233
	I0729 14:38:51.149520 1038758 fix.go:229] Guest: 2024-07-29 14:38:51.130931233 +0000 UTC Remote: 2024-07-29 14:38:51.029761931 +0000 UTC m=+354.409484230 (delta=101.169302ms)
	I0729 14:38:51.149575 1038758 fix.go:200] guest clock delta is within tolerance: 101.169302ms
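
fix.go above compares the guest clock against the host and accepts the ~101 ms delta. A minimal sketch of such a tolerance check, using the two timestamps from the log and an assumed 2-second threshold (the actual threshold is not shown in the log), looks like:

```go
// Illustrative sketch of a clock-delta tolerance check like the one logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Date(2024, time.July, 29, 14, 38, 51, 130931233, time.UTC)
	host := time.Date(2024, time.July, 29, 14, 38, 51, 29761931, time.UTC)
	const tolerance = 2 * time.Second // assumed threshold for this sketch

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```
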
	I0729 14:38:51.149583 1038758 start.go:83] releasing machines lock for "no-preload-603534", held for 19.787859214s
	I0729 14:38:51.149617 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.149923 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:51.152671 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153054 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.153081 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.153298 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.153898 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154092 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:38:51.154192 1038758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 14:38:51.154245 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.154349 1038758 ssh_runner.go:195] Run: cat /version.json
	I0729 14:38:51.154378 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:38:51.157173 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157200 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157560 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157592 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157635 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:51.157654 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:51.157955 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.157976 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:38:51.158169 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158195 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:38:51.158370 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158381 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:38:51.158565 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.158572 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:38:51.260806 1038758 ssh_runner.go:195] Run: systemctl --version
	I0729 14:38:51.266847 1038758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 14:38:51.412637 1038758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 14:38:51.418879 1038758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 14:38:51.418954 1038758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 14:38:51.435946 1038758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
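
The find command above is mangled by the log's %!p(MISSING) formatting quirk; its effect, as the following "disabled [...] bridge cni config(s)" line confirms, is to rename bridge/podman CNI configs in /etc/cni/net.d with a .mk_disabled suffix. A small sketch of the same step (illustrative, needs root, and not minikube's cni.go):

```go
// Illustrative sketch: disable bridge/podman CNI configs by renaming them
// with a ".mk_disabled" suffix, as the mangled find command in the log does.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			panic(err)
		}
		fmt.Println("disabled", src)
	}
}
```
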
	I0729 14:38:51.435978 1038758 start.go:495] detecting cgroup driver to use...
	I0729 14:38:51.436061 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 14:38:51.457517 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 14:38:51.472718 1038758 docker.go:217] disabling cri-docker service (if available) ...
	I0729 14:38:51.472811 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 14:38:51.487062 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 14:38:51.501410 1038758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 14:38:51.617292 1038758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 14:38:47.708063 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:49.708506 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.209337 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:51.764302 1038758 docker.go:233] disabling docker service ...
	I0729 14:38:51.764386 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 14:38:51.779137 1038758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 14:38:51.794372 1038758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 14:38:51.930402 1038758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 14:38:52.062691 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 14:38:52.076796 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 14:38:52.095912 1038758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 14:38:52.095994 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.107507 1038758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 14:38:52.107588 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.119470 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.131252 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.141672 1038758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 14:38:52.152086 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.163682 1038758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.189614 1038758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 14:38:52.200279 1038758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 14:38:52.211878 1038758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 14:38:52.211943 1038758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 14:38:52.224909 1038758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 14:38:52.234312 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:38:52.357370 1038758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 14:38:52.492520 1038758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 14:38:52.492622 1038758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 14:38:52.497537 1038758 start.go:563] Will wait 60s for crictl version
	I0729 14:38:52.497595 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.501292 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 14:38:52.544320 1038758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 14:38:52.544428 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.575452 1038758 ssh_runner.go:195] Run: crio --version
	I0729 14:38:52.605920 1038758 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
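
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image pinned to registry.k8s.io/pause:3.10, cgroup_manager set to cgroupfs), falls back to modprobe br_netfilter when the bridge sysctl cannot be read, and then restarts CRI-O. A standalone sketch of just the config rewrite, with the path and keys taken from the log (not minikube's crio.go):

```go
// Illustrative sketch: rewrite a CRI-O drop-in the way the sed commands in the
// log do (pin pause_image, set cgroup_manager to cgroupfs). Needs root.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
	// A "systemctl restart crio", as in the log, is still needed to apply it.
}
```
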
	I0729 14:38:49.156539 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:49.656397 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:50.656968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.156321 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.656183 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.157099 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:52.656725 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.157009 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:53.656787 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:51.796331 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:53.799083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:52.607410 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetIP
	I0729 14:38:52.610017 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610296 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:38:52.610330 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:38:52.610553 1038758 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 14:38:52.614659 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:38:52.626967 1038758 kubeadm.go:883] updating cluster {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 14:38:52.627087 1038758 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 14:38:52.627124 1038758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 14:38:52.662824 1038758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 14:38:52.662852 1038758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 14:38:52.662901 1038758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.662968 1038758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.663040 1038758 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 14:38:52.663043 1038758 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.663066 1038758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.662987 1038758 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.663017 1038758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.664360 1038758 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 14:38:52.664947 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.664965 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.664954 1038758 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.665015 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.665023 1038758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:52.665351 1038758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.665423 1038758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.829143 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.829158 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.829541 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.851797 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:52.866728 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 14:38:52.884604 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:52.893636 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:52.946087 1038758 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 14:38:52.946134 1038758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 14:38:52.946160 1038758 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:52.946170 1038758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:52.946173 1038758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 14:38:52.946192 1038758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:52.946216 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946221 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.946217 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:52.954361 1038758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.001715 1038758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 14:38:53.001766 1038758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.001826 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106651 1038758 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 14:38:53.106713 1038758 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.106770 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106838 1038758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 14:38:53.106883 1038758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.106921 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.106927 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 14:38:53.106962 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 14:38:53.107012 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 14:38:53.107038 1038758 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 14:38:53.107067 1038758 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.107079 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 14:38:53.107092 1038758 ssh_runner.go:195] Run: which crictl
	I0729 14:38:53.131562 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 14:38:53.212076 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:38:53.212199 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.212272 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.214338 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.214430 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:53.216771 1038758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 14:38:53.216941 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 14:38:53.217037 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:53.220214 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.220306 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:53.272021 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 14:38:53.272140 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:53.275939 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 14:38:53.275988 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276008 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.276009 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 14:38:53.276029 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:38:53.276054 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 14:38:53.301528 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 14:38:53.301578 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 14:38:53.301600 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 14:38:53.301647 1038758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 14:38:53.301759 1038758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:38:55.357295 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.08120738s)
	I0729 14:38:55.357329 1038758 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.081270007s)
	I0729 14:38:55.357371 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 14:38:55.357338 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 14:38:55.357384 1038758 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.055605102s)
	I0729 14:38:55.357406 1038758 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 14:38:55.357407 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:55.357464 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 14:38:54.708330 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.207468 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:54.156921 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:54.656957 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.156201 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:55.656783 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.156180 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.656984 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.156610 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:57.656127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.156785 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:58.656192 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:56.295143 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:58.795511 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:57.217512 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.860011805s)
	I0729 14:38:57.217539 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 14:38:57.217570 1038758 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:57.217634 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 14:38:59.187398 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.969733063s)
	I0729 14:38:59.187443 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 14:38:59.187482 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:38:59.187562 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 14:39:01.138568 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.950970137s)
	I0729 14:39:01.138617 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 14:39:01.138654 1038758 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:39:01.138740 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 14:38:59.207657 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:01.208795 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:38:59.156740 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:38:59.656223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.156726 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:00.656593 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.156115 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.656364 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.157069 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:02.656491 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.156938 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:03.656898 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:01.293858 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:03.484613 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.793953 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.231830 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.093043665s)
	I0729 14:39:04.231866 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 14:39:04.231897 1038758 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:04.231963 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 14:39:05.182458 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 14:39:05.182512 1038758 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:05.182566 1038758 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 14:39:03.209198 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:05.707557 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:04.157177 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:04.656505 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.156530 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:05.656389 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.156606 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:06.657121 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.157048 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.656497 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.156327 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:08.656868 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:07.794522 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.794886 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:07.253615 1038758 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.070972791s)
	I0729 14:39:07.253665 1038758 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19338-974764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 14:39:07.253700 1038758 cache_images.go:123] Successfully loaded all cached images
	I0729 14:39:07.253707 1038758 cache_images.go:92] duration metric: took 14.590842072s to LoadCachedImages
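
Because no preload tarball exists for v1.31.0-beta.0, each required image is removed from the runtime, copied from the local cache, and loaded with `sudo podman load -i`, which accounts for the ~14.6 s spent in LoadCachedImages above. A minimal sketch of the per-image load step (the tarball path is one of those shown in the log; this is not minikube's cache_images.go):

```go
// Illustrative sketch of the per-image load step seen in the log:
// load a cached image tarball into the container runtime with podman.
package main

import (
	"fmt"
	"os/exec"
)

func loadImage(tarball string) error {
	// Equivalent to the "sudo podman load -i <tarball>" runs in the log.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0"); err != nil {
		fmt.Println(err)
	}
}
```
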
	I0729 14:39:07.253720 1038758 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0-beta.0 crio true true} ...
	I0729 14:39:07.253899 1038758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-603534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 14:39:07.253980 1038758 ssh_runner.go:195] Run: crio config
	I0729 14:39:07.309694 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:07.309720 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:07.309731 1038758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 14:39:07.309754 1038758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603534 NodeName:no-preload-603534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 14:39:07.309916 1038758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603534"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 14:39:07.309985 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 14:39:07.321876 1038758 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 14:39:07.321967 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 14:39:07.333057 1038758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 14:39:07.350193 1038758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 14:39:07.367171 1038758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
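
The kubeadm config printed above is rendered from the node's IP, name, and Kubernetes version and then written to /var/tmp/minikube/kubeadm.yaml.new. A heavily reduced sketch of that templating, keeping only a few of the fields shown in the log (not minikube's actual template):

```go
// Illustrative sketch: render a reduced kubeadm config from the values that
// appear in the log. Not minikube's kubeadm.go template.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"NodeIP":            "192.168.61.116",
		"NodeName":          "no-preload-603534",
		"APIServerPort":     "8443",
		"KubernetesVersion": "v1.31.0-beta.0",
	})
	if err != nil {
		panic(err)
	}
}
```
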
	I0729 14:39:07.384123 1038758 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0729 14:39:07.387896 1038758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 14:39:07.400317 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:39:07.525822 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:39:07.545142 1038758 certs.go:68] Setting up /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534 for IP: 192.168.61.116
	I0729 14:39:07.545167 1038758 certs.go:194] generating shared ca certs ...
	I0729 14:39:07.545189 1038758 certs.go:226] acquiring lock for ca certs: {Name:mk49ca2c0d607456f32457f31c51812910fb9911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:39:07.545389 1038758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key
	I0729 14:39:07.545448 1038758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key
	I0729 14:39:07.545463 1038758 certs.go:256] generating profile certs ...
	I0729 14:39:07.545582 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/client.key
	I0729 14:39:07.545658 1038758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key.117a155a
	I0729 14:39:07.545725 1038758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key
	I0729 14:39:07.545881 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem (1338 bytes)
	W0729 14:39:07.545913 1038758 certs.go:480] ignoring /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046_empty.pem, impossibly tiny 0 bytes
	I0729 14:39:07.545922 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 14:39:07.545945 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/ca.pem (1078 bytes)
	I0729 14:39:07.545969 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/cert.pem (1123 bytes)
	I0729 14:39:07.545990 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/certs/key.pem (1675 bytes)
	I0729 14:39:07.546027 1038758 certs.go:484] found cert: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem (1708 bytes)
	I0729 14:39:07.546679 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 14:39:07.582488 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 14:39:07.617327 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 14:39:07.647627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 14:39:07.685799 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 14:39:07.720365 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 14:39:07.744627 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 14:39:07.771409 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/no-preload-603534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 14:39:07.797570 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/ssl/certs/9820462.pem --> /usr/share/ca-certificates/9820462.pem (1708 bytes)
	I0729 14:39:07.820888 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 14:39:07.843714 1038758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-974764/.minikube/certs/982046.pem --> /usr/share/ca-certificates/982046.pem (1338 bytes)
	I0729 14:39:07.867365 1038758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 14:39:07.884283 1038758 ssh_runner.go:195] Run: openssl version
	I0729 14:39:07.890379 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9820462.pem && ln -fs /usr/share/ca-certificates/9820462.pem /etc/ssl/certs/9820462.pem"
	I0729 14:39:07.901894 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906431 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 13:24 /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.906487 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9820462.pem
	I0729 14:39:07.912284 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9820462.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 14:39:07.923393 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 14:39:07.934119 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938563 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 13:12 /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.938620 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 14:39:07.944115 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 14:39:07.954815 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/982046.pem && ln -fs /usr/share/ca-certificates/982046.pem /etc/ssl/certs/982046.pem"
	I0729 14:39:07.965864 1038758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970695 1038758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 13:24 /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.970761 1038758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/982046.pem
	I0729 14:39:07.977340 1038758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/982046.pem /etc/ssl/certs/51391683.0"
	I0729 14:39:07.990416 1038758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 14:39:07.995446 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 14:39:08.001615 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 14:39:08.007621 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 14:39:08.013648 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 14:39:08.019525 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 14:39:08.025505 1038758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
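The `openssl x509 -checkend 86400` probes above ask whether each control-plane certificate is still valid for at least another 24 hours. A minimal Go equivalent, assuming a single PEM-encoded certificate per file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Rough equivalent of `openssl x509 -noout -in <file> -checkend 86400`:
	// exit non-zero if the certificate expires within the next 24 hours.
	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
			os.Exit(2)
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(2)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}

For example, `go run checkend.go /var/lib/minikube/certs/apiserver.crt` would mirror the apiserver check in the log above.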
	I0729 14:39:08.031480 1038758 kubeadm.go:392] StartCluster: {Name:no-preload-603534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-603534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 14:39:08.031592 1038758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 14:39:08.031657 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.077847 1038758 cri.go:89] found id: ""
	I0729 14:39:08.077936 1038758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 14:39:08.088616 1038758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 14:39:08.088639 1038758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 14:39:08.088704 1038758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 14:39:08.101150 1038758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 14:39:08.102305 1038758 kubeconfig.go:125] found "no-preload-603534" server: "https://192.168.61.116:8443"
	I0729 14:39:08.105529 1038758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 14:39:08.117031 1038758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0729 14:39:08.117070 1038758 kubeadm.go:1160] stopping kube-system containers ...
	I0729 14:39:08.117085 1038758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 14:39:08.117148 1038758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 14:39:08.171626 1038758 cri.go:89] found id: ""
	I0729 14:39:08.171706 1038758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 14:39:08.190491 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:39:08.200776 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:39:08.200806 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:39:08.200873 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:39:08.211430 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:39:08.211483 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:39:08.221865 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:39:08.231668 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:39:08.231719 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:39:08.242027 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.251585 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:39:08.251639 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:39:08.261521 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:39:08.271210 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:39:08.271284 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:39:08.281112 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:39:08.290948 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:08.417397 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.400064 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.590859 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:09.670134 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
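The five commands above restart the control plane by running individual `kubeadm init` phases against the regenerated kubeadm.yaml instead of a full `kubeadm init`. A rough sketch of that sequence as a hypothetical wrapper (paths copied from the log, error handling minimal):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Run the same kubeadm init phases, in the same order, as the restart
	// path logged above. This is an illustrative sketch, not minikube's code.
	func main() {
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, phase := range phases {
			args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(phase)...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("sudo", args...).CombinedOutput()
			fmt.Printf("kubeadm init phase %s:\n%s\n", phase, out)
			if err != nil {
				fmt.Println("phase failed:", err)
				return
			}
		}
	}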
	I0729 14:39:09.781580 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:39:09.781719 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.282592 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.781923 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.843114 1038758 api_server.go:72] duration metric: took 1.061535691s to wait for apiserver process to appear ...
	I0729 14:39:10.843151 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:39:10.843182 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:10.843715 1038758 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0729 14:39:11.343301 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:08.207563 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:10.208276 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:09.156858 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:09.656910 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.156126 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:10.657149 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.156223 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:11.657184 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.156454 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:12.656896 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.656971 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:13.993249 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:13.993278 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:13.993298 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.011972 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 14:39:14.012012 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 14:39:14.343228 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.347946 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.347983 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:14.844144 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:14.858278 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 14:39:14.858311 1038758 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 14:39:15.343885 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:39:15.350223 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:39:15.360468 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:39:15.360513 1038758 api_server.go:131] duration metric: took 4.517353977s to wait for apiserver health ...
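The healthz wait above keeps polling https://192.168.61.116:8443/healthz, tolerating the initial connection-refused, 403, and 500 responses until the endpoint finally answers 200. A minimal sketch of such a poll loop; the address and rough timeouts are taken from this log, and TLS verification is skipped because the probe runs before the caller trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.116:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}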
	I0729 14:39:15.360524 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:39:15.360532 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:39:15.362679 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:39:12.293516 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.294107 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.364237 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:39:15.379974 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:39:15.422444 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:39:15.441468 1038758 system_pods.go:59] 8 kube-system pods found
	I0729 14:39:15.441512 1038758 system_pods.go:61] "coredns-5cfdc65f69-tjdx4" [986cdef3-de61-4c0f-bc75-fae4f6b44a37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 14:39:15.441525 1038758 system_pods.go:61] "etcd-no-preload-603534" [e27f5761-5322-4d88-b90a-bcff42c9dfa5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 14:39:15.441537 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [33ed9f7c-1240-40cf-b51d-125b3473bfd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 14:39:15.441547 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [f79520a2-380e-4d8a-b1ff-78c6cd3d3741] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 14:39:15.441559 1038758 system_pods.go:61] "kube-proxy-ftpk5" [a5471ad7-5fd3-49b7-8631-4ca2962761d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 14:39:15.441568 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [860e262c-f036-4181-a0ad-8ba0058a47d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 14:39:15.441580 1038758 system_pods.go:61] "metrics-server-78fcd8795b-59sbc" [8af92987-ce8d-434f-93de-16d0adc35fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:39:15.441598 1038758 system_pods.go:61] "storage-provisioner" [579d0cc8-e30e-4ee3-ac55-c2f0bc5871e1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 14:39:15.441606 1038758 system_pods.go:74] duration metric: took 19.133029ms to wait for pod list to return data ...
	I0729 14:39:15.441618 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:39:15.445594 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:39:15.445630 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:39:15.445646 1038758 node_conditions.go:105] duration metric: took 4.019018ms to run NodePressure ...
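The kube-system pod inventory above (8 pods found, most with unready containers) can be reproduced with a plain client-go listing. The sketch below is illustrative only, not minikube's implementation; the kubeconfig path is the one written earlier in this log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("  %s\t%s\n", p.Name, p.Status.Phase)
		}
	}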
	I0729 14:39:15.445678 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 14:39:15.743404 1038758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751028 1038758 kubeadm.go:739] kubelet initialised
	I0729 14:39:15.751050 1038758 kubeadm.go:740] duration metric: took 7.619795ms waiting for restarted kubelet to initialise ...
	I0729 14:39:15.751059 1038758 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:39:15.759157 1038758 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:12.708704 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:15.208434 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:14.157127 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:14.656806 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.156564 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:15.656881 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.156239 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.656440 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.157130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:17.656240 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.156161 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:18.656808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:16.294741 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:18.797700 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.768132 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.265670 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:17.709929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:20.206710 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.207809 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:19.156721 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:19.656766 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.156352 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:20.656788 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.156179 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.656213 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.156475 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:22.656275 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.156592 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:23.656979 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:21.294265 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:23.294366 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:25.794648 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:22.265947 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.266644 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.708214 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:27.208824 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:24.156798 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:24.656473 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.156551 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:25.656356 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.156887 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:26.656332 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.156494 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.656839 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.156763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:28.656512 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:27.795415 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:30.293460 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:26.766260 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.265817 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.265851 1038758 pod_ready.go:81] duration metric: took 13.506661461s for pod "coredns-5cfdc65f69-tjdx4" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.265865 1038758 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276021 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.276043 1038758 pod_ready.go:81] duration metric: took 10.172055ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.276052 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280197 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.280215 1038758 pod_ready.go:81] duration metric: took 4.156785ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.280223 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284076 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.284096 1038758 pod_ready.go:81] duration metric: took 3.865927ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.284122 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288280 1038758 pod_ready.go:92] pod "kube-proxy-ftpk5" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.288297 1038758 pod_ready.go:81] duration metric: took 4.16843ms for pod "kube-proxy-ftpk5" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.288305 1038758 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666771 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:39:29.666802 1038758 pod_ready.go:81] duration metric: took 378.49001ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:39:29.666813 1038758 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
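The pod_ready checks throughout this log report "Ready":"False" until a pod's PodReady condition turns True (which is why the metrics-server waits above keep looping). A small sketch of that readiness predicate, not the exact helper used by the test harness:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady mirrors the predicate behind the pod_ready waits: a pod
	// counts as "Ready" once its PodReady condition reports True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{
					{Type: corev1.PodReady, Status: corev1.ConditionFalse},
				},
			},
		}
		fmt.Println("ready:", isPodReady(pod)) // prints "ready: false", like the metrics-server pod above
	}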
	I0729 14:39:29.706596 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:32.208095 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:29.156096 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:29.656289 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.156693 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:30.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.156756 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:31.656888 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.156563 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.656795 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.156271 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:33.656562 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:32.293988 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.793456 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:31.674203 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.174002 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.708005 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:37.206789 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:34.157046 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:34.656398 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.156198 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:35.656763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.156542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.656994 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.156808 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:37.657093 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.156119 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:38.657017 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:36.793771 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.294267 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:36.676693 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.172713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.174348 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.207584 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:41.707645 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:39.156909 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:39.656176 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.156455 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:40.656609 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.156891 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:41.656327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:41.656423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:41.701839 1039759 cri.go:89] found id: ""
	I0729 14:39:41.701863 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.701872 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:41.701878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:41.701942 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:41.738281 1039759 cri.go:89] found id: ""
	I0729 14:39:41.738308 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.738315 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:41.738321 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:41.738377 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:41.771954 1039759 cri.go:89] found id: ""
	I0729 14:39:41.771981 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.771990 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:41.771996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:41.772060 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:41.806157 1039759 cri.go:89] found id: ""
	I0729 14:39:41.806182 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.806190 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:41.806196 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:41.806249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:41.841284 1039759 cri.go:89] found id: ""
	I0729 14:39:41.841312 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.841319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:41.841325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:41.841394 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:41.875864 1039759 cri.go:89] found id: ""
	I0729 14:39:41.875893 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.875902 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:41.875908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:41.875962 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:41.909797 1039759 cri.go:89] found id: ""
	I0729 14:39:41.909824 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.909833 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:41.909840 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:41.909892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:41.943886 1039759 cri.go:89] found id: ""
	I0729 14:39:41.943912 1039759 logs.go:276] 0 containers: []
	W0729 14:39:41.943920 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:41.943929 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:41.943944 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:41.983224 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:41.983254 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:42.035264 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:42.035303 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:42.049343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:42.049369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:42.171904 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:42.171924 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:42.171947 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
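The repeated `crictl ps -a --quiet --name=...` calls above return no container IDs, which is what produces the `No container was found matching ...` warnings for the old-k8s-version profile. A minimal sketch of that discovery step, shelling out to crictl the same way the log does:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// List kube-apiserver containers via crictl; an empty result corresponds
	// to the "found id: \"\"" / "0 containers" lines in the log above.
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println(`No container was found matching "kube-apiserver"`)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}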
	I0729 14:39:41.295209 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.795811 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:43.673853 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:45.674302 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.207555 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:46.707384 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:44.738629 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:44.753497 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:44.753582 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:44.793256 1039759 cri.go:89] found id: ""
	I0729 14:39:44.793283 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.793291 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:44.793298 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:44.793363 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:44.833698 1039759 cri.go:89] found id: ""
	I0729 14:39:44.833726 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.833733 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:44.833739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:44.833792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:44.876328 1039759 cri.go:89] found id: ""
	I0729 14:39:44.876357 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.876366 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:44.876372 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:44.876471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:44.918091 1039759 cri.go:89] found id: ""
	I0729 14:39:44.918121 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.918132 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:44.918140 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:44.918210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:44.965105 1039759 cri.go:89] found id: ""
	I0729 14:39:44.965137 1039759 logs.go:276] 0 containers: []
	W0729 14:39:44.965149 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:44.965157 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:44.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:45.014119 1039759 cri.go:89] found id: ""
	I0729 14:39:45.014150 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.014162 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:45.014170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:45.014243 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:45.059826 1039759 cri.go:89] found id: ""
	I0729 14:39:45.059858 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.059870 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:45.059879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:45.059946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:45.099666 1039759 cri.go:89] found id: ""
	I0729 14:39:45.099706 1039759 logs.go:276] 0 containers: []
	W0729 14:39:45.099717 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:45.099730 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:45.099748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:45.144219 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:45.144263 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:45.199719 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:45.199754 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:45.214225 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:45.214260 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:45.289090 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:45.289119 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:45.289138 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:47.860797 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:47.874523 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:47.874606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:47.913570 1039759 cri.go:89] found id: ""
	I0729 14:39:47.913599 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.913608 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:47.913615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:47.913674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:47.946699 1039759 cri.go:89] found id: ""
	I0729 14:39:47.946725 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.946734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:47.946740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:47.946792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:47.986492 1039759 cri.go:89] found id: ""
	I0729 14:39:47.986533 1039759 logs.go:276] 0 containers: []
	W0729 14:39:47.986546 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:47.986554 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:47.986635 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:48.027232 1039759 cri.go:89] found id: ""
	I0729 14:39:48.027260 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.027268 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:48.027274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:48.027327 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:48.065119 1039759 cri.go:89] found id: ""
	I0729 14:39:48.065145 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.065153 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:48.065159 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:48.065217 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:48.105087 1039759 cri.go:89] found id: ""
	I0729 14:39:48.105119 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.105128 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:48.105134 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:48.105199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:48.144684 1039759 cri.go:89] found id: ""
	I0729 14:39:48.144718 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.144730 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:48.144737 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:48.144816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:48.180350 1039759 cri.go:89] found id: ""
	I0729 14:39:48.180380 1039759 logs.go:276] 0 containers: []
	W0729 14:39:48.180389 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:48.180401 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:48.180436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:48.259859 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:48.259905 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:48.301132 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:48.301163 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:48.352753 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:48.352795 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:48.365936 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:48.365969 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:48.434634 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:46.293123 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.293674 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.294113 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:47.674411 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.173727 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:48.707887 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:51.207444 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:50.934903 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:50.948702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:50.948787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:50.982889 1039759 cri.go:89] found id: ""
	I0729 14:39:50.982917 1039759 logs.go:276] 0 containers: []
	W0729 14:39:50.982927 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:50.982933 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:50.983010 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:51.020679 1039759 cri.go:89] found id: ""
	I0729 14:39:51.020713 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.020726 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:51.020734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:51.020818 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:51.055114 1039759 cri.go:89] found id: ""
	I0729 14:39:51.055147 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.055158 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:51.055166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:51.055237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:51.089053 1039759 cri.go:89] found id: ""
	I0729 14:39:51.089087 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.089099 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:51.089108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:51.089184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:51.125823 1039759 cri.go:89] found id: ""
	I0729 14:39:51.125851 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.125861 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:51.125868 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:51.125938 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:51.162645 1039759 cri.go:89] found id: ""
	I0729 14:39:51.162683 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.162694 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:51.162702 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:51.162767 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:51.196820 1039759 cri.go:89] found id: ""
	I0729 14:39:51.196849 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.196857 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:51.196864 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:51.196937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:51.236442 1039759 cri.go:89] found id: ""
	I0729 14:39:51.236469 1039759 logs.go:276] 0 containers: []
	W0729 14:39:51.236479 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:51.236491 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:51.236506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:51.317077 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:51.317101 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:51.317119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:51.398118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:51.398172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:51.437096 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:51.437128 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:51.488949 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:51.488992 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:52.795544 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.294184 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:52.174241 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.672702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:53.207592 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:55.706971 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:54.004536 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:54.019400 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:54.019480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:54.054592 1039759 cri.go:89] found id: ""
	I0729 14:39:54.054626 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.054639 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:54.054647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:54.054712 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:54.090184 1039759 cri.go:89] found id: ""
	I0729 14:39:54.090217 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.090227 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:54.090234 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:54.090304 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:54.129977 1039759 cri.go:89] found id: ""
	I0729 14:39:54.130007 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.130016 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:54.130022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:54.130081 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:54.170940 1039759 cri.go:89] found id: ""
	I0729 14:39:54.170970 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.170980 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:54.170988 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:54.171053 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:54.206197 1039759 cri.go:89] found id: ""
	I0729 14:39:54.206224 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.206244 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:54.206251 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:54.206340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:54.246929 1039759 cri.go:89] found id: ""
	I0729 14:39:54.246963 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.246973 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:54.246980 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:54.247049 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:54.286202 1039759 cri.go:89] found id: ""
	I0729 14:39:54.286231 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.286240 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:54.286245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:54.286315 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:54.321784 1039759 cri.go:89] found id: ""
	I0729 14:39:54.321815 1039759 logs.go:276] 0 containers: []
	W0729 14:39:54.321824 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:54.321837 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:54.321860 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:54.363159 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:54.363187 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:54.416151 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:54.416194 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:54.429824 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:54.429852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:54.506348 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:54.506373 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:54.506390 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.094810 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:39:57.108163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:39:57.108238 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:39:57.143556 1039759 cri.go:89] found id: ""
	I0729 14:39:57.143588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.143601 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:39:57.143608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:39:57.143678 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:39:57.177664 1039759 cri.go:89] found id: ""
	I0729 14:39:57.177695 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.177706 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:39:57.177714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:39:57.177801 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:39:57.212046 1039759 cri.go:89] found id: ""
	I0729 14:39:57.212106 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.212231 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:39:57.212249 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:39:57.212323 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:39:57.252518 1039759 cri.go:89] found id: ""
	I0729 14:39:57.252549 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.252558 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:39:57.252564 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:39:57.252677 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:39:57.287045 1039759 cri.go:89] found id: ""
	I0729 14:39:57.287069 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.287077 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:39:57.287084 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:39:57.287141 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:39:57.324553 1039759 cri.go:89] found id: ""
	I0729 14:39:57.324588 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.324599 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:39:57.324607 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:39:57.324684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:39:57.358761 1039759 cri.go:89] found id: ""
	I0729 14:39:57.358801 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.358811 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:39:57.358819 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:39:57.358898 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:39:57.402023 1039759 cri.go:89] found id: ""
	I0729 14:39:57.402051 1039759 logs.go:276] 0 containers: []
	W0729 14:39:57.402062 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:39:57.402074 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:39:57.402094 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:39:57.445600 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:39:57.445632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:39:57.501876 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:39:57.501911 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:39:57.518264 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:39:57.518299 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:39:57.593247 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:39:57.593274 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:39:57.593292 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:39:57.793782 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.794287 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:56.673243 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:59.174416 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:39:57.707618 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.208574 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:00.181109 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:00.194553 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:00.194641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:00.237761 1039759 cri.go:89] found id: ""
	I0729 14:40:00.237801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.237814 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:00.237829 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:00.237901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:00.273113 1039759 cri.go:89] found id: ""
	I0729 14:40:00.273145 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.273157 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:00.273166 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:00.273232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:00.312136 1039759 cri.go:89] found id: ""
	I0729 14:40:00.312169 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.312176 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:00.312182 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:00.312249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:00.349610 1039759 cri.go:89] found id: ""
	I0729 14:40:00.349642 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.349654 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:00.349662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:00.349792 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:00.384121 1039759 cri.go:89] found id: ""
	I0729 14:40:00.384148 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.384157 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:00.384163 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:00.384233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:00.419684 1039759 cri.go:89] found id: ""
	I0729 14:40:00.419720 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.419731 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:00.419739 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:00.419809 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:00.453905 1039759 cri.go:89] found id: ""
	I0729 14:40:00.453937 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.453945 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:00.453951 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:00.454023 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:00.490124 1039759 cri.go:89] found id: ""
	I0729 14:40:00.490149 1039759 logs.go:276] 0 containers: []
	W0729 14:40:00.490158 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:00.490168 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:00.490185 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:00.562684 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:00.562713 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:00.562735 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:00.643860 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:00.643899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:00.683247 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:00.683276 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:00.734131 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:00.734166 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.249468 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:03.262712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:03.262788 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:03.300774 1039759 cri.go:89] found id: ""
	I0729 14:40:03.300801 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.300816 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:03.300823 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:03.300891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:03.335367 1039759 cri.go:89] found id: ""
	I0729 14:40:03.335398 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.335409 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:03.335419 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:03.335488 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:03.375683 1039759 cri.go:89] found id: ""
	I0729 14:40:03.375717 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.375728 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:03.375734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:03.375814 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:03.409593 1039759 cri.go:89] found id: ""
	I0729 14:40:03.409623 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.409631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:03.409637 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:03.409711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:03.444531 1039759 cri.go:89] found id: ""
	I0729 14:40:03.444566 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.444578 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:03.444585 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:03.444655 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:03.479446 1039759 cri.go:89] found id: ""
	I0729 14:40:03.479476 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.479487 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:03.479495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:03.479563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:03.517277 1039759 cri.go:89] found id: ""
	I0729 14:40:03.517311 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.517321 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:03.517329 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:03.517396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:03.556343 1039759 cri.go:89] found id: ""
	I0729 14:40:03.556373 1039759 logs.go:276] 0 containers: []
	W0729 14:40:03.556381 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:03.556391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:03.556422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:03.610156 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:03.610196 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:03.624776 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:03.624812 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:03.696584 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:03.696609 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:03.696625 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:03.775066 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:03.775109 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:01.794683 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:03.795112 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:01.673980 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.173900 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:02.706731 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:04.707655 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:07.207027 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.319720 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:06.332865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:06.332937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:06.366576 1039759 cri.go:89] found id: ""
	I0729 14:40:06.366608 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.366631 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:06.366639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:06.366730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:06.402710 1039759 cri.go:89] found id: ""
	I0729 14:40:06.402735 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.402743 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:06.402748 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:06.402804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:06.439048 1039759 cri.go:89] found id: ""
	I0729 14:40:06.439095 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.439116 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:06.439125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:06.439196 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:06.473407 1039759 cri.go:89] found id: ""
	I0729 14:40:06.473443 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.473456 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:06.473464 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:06.473544 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:06.507278 1039759 cri.go:89] found id: ""
	I0729 14:40:06.507309 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.507319 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:06.507327 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:06.507396 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:06.541573 1039759 cri.go:89] found id: ""
	I0729 14:40:06.541600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.541608 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:06.541617 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:06.541679 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:06.587666 1039759 cri.go:89] found id: ""
	I0729 14:40:06.587697 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.587707 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:06.587714 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:06.587773 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:06.622415 1039759 cri.go:89] found id: ""
	I0729 14:40:06.622448 1039759 logs.go:276] 0 containers: []
	W0729 14:40:06.622459 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:06.622478 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:06.622497 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:06.659987 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:06.660019 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:06.716303 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:06.716338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:06.731051 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:06.731076 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:06.809014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:06.809045 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:06.809064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:06.293552 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:08.294453 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:10.295216 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:06.674445 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.174349 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.207784 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.208318 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:09.387843 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:09.401894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:09.401984 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:09.439385 1039759 cri.go:89] found id: ""
	I0729 14:40:09.439425 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.439438 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:09.439446 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:09.439506 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:09.474307 1039759 cri.go:89] found id: ""
	I0729 14:40:09.474340 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.474352 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:09.474361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:09.474434 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:09.508198 1039759 cri.go:89] found id: ""
	I0729 14:40:09.508233 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.508245 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:09.508253 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:09.508325 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:09.543729 1039759 cri.go:89] found id: ""
	I0729 14:40:09.543762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.543772 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:09.543779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:09.543847 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:09.598723 1039759 cri.go:89] found id: ""
	I0729 14:40:09.598760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.598769 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:09.598775 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:09.598841 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:09.636009 1039759 cri.go:89] found id: ""
	I0729 14:40:09.636038 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.636050 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:09.636058 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:09.636126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:09.675590 1039759 cri.go:89] found id: ""
	I0729 14:40:09.675618 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.675628 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:09.675636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:09.675698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:09.710331 1039759 cri.go:89] found id: ""
	I0729 14:40:09.710374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:09.710385 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:09.710397 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:09.710416 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:09.790014 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:09.790046 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:09.790064 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:09.870233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:09.870278 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:09.910421 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:09.910454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:09.962429 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:09.962474 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.476775 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:12.490804 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:12.490875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:12.529435 1039759 cri.go:89] found id: ""
	I0729 14:40:12.529466 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.529478 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:12.529485 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:12.529551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:12.564769 1039759 cri.go:89] found id: ""
	I0729 14:40:12.564806 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.564818 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:12.564826 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:12.564912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:12.600253 1039759 cri.go:89] found id: ""
	I0729 14:40:12.600285 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.600296 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:12.600304 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:12.600367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:12.636112 1039759 cri.go:89] found id: ""
	I0729 14:40:12.636146 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.636155 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:12.636161 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:12.636216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:12.675592 1039759 cri.go:89] found id: ""
	I0729 14:40:12.675621 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.675631 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:12.675639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:12.675711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:12.711438 1039759 cri.go:89] found id: ""
	I0729 14:40:12.711469 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.711480 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:12.711488 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:12.711554 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:12.745497 1039759 cri.go:89] found id: ""
	I0729 14:40:12.745524 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.745533 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:12.745539 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:12.745598 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:12.778934 1039759 cri.go:89] found id: ""
	I0729 14:40:12.778966 1039759 logs.go:276] 0 containers: []
	W0729 14:40:12.778977 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:12.778991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:12.779010 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:12.854721 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:12.854759 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:12.854780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:12.932118 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:12.932158 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:12.974429 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:12.974461 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:13.030073 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:13.030108 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:12.795056 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.295125 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:11.674169 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:14.173503 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:16.175691 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:13.707268 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.708540 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:15.544245 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:15.559013 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:15.559090 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:15.594018 1039759 cri.go:89] found id: ""
	I0729 14:40:15.594051 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.594064 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:15.594076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:15.594147 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:15.630734 1039759 cri.go:89] found id: ""
	I0729 14:40:15.630762 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.630771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:15.630777 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:15.630832 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:15.666159 1039759 cri.go:89] found id: ""
	I0729 14:40:15.666191 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.666202 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:15.666210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:15.666275 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:15.701058 1039759 cri.go:89] found id: ""
	I0729 14:40:15.701088 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.701098 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:15.701115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:15.701170 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:15.737006 1039759 cri.go:89] found id: ""
	I0729 14:40:15.737040 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.737052 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:15.737066 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:15.737139 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:15.775678 1039759 cri.go:89] found id: ""
	I0729 14:40:15.775706 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.775718 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:15.775728 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:15.775795 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:15.812239 1039759 cri.go:89] found id: ""
	I0729 14:40:15.812268 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.812276 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:15.812283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:15.812348 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:15.847653 1039759 cri.go:89] found id: ""
	I0729 14:40:15.847682 1039759 logs.go:276] 0 containers: []
	W0729 14:40:15.847693 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:15.847707 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:15.847725 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:15.903094 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:15.903137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:15.917060 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:15.917093 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:15.993458 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:15.993481 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:15.993499 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:16.073369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:16.073409 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:18.616107 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:18.630263 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:18.630340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:18.668228 1039759 cri.go:89] found id: ""
	I0729 14:40:18.668261 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.668271 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:18.668279 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:18.668342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:18.706863 1039759 cri.go:89] found id: ""
	I0729 14:40:18.706891 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.706902 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:18.706909 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:18.706978 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:18.739703 1039759 cri.go:89] found id: ""
	I0729 14:40:18.739728 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.739736 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:18.739742 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:18.739796 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:18.777025 1039759 cri.go:89] found id: ""
	I0729 14:40:18.777066 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.777077 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:18.777085 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:18.777158 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:18.814000 1039759 cri.go:89] found id: ""
	I0729 14:40:18.814026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.814039 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:18.814051 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:18.814116 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:18.851027 1039759 cri.go:89] found id: ""
	I0729 14:40:18.851058 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.851069 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:18.851076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:18.851151 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:17.796245 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.293964 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.673560 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:21.173099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.207376 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:20.707629 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:18.903888 1039759 cri.go:89] found id: ""
	I0729 14:40:18.903920 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.903932 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:18.903941 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:18.904002 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:18.938756 1039759 cri.go:89] found id: ""
	I0729 14:40:18.938784 1039759 logs.go:276] 0 containers: []
	W0729 14:40:18.938791 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:18.938801 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:18.938814 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:18.988482 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:18.988520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:19.002145 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:19.002177 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:19.085372 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:19.085397 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:19.085424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:19.171294 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:19.171387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:21.709578 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:21.722874 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:21.722941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:21.768110 1039759 cri.go:89] found id: ""
	I0729 14:40:21.768139 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.768150 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:21.768156 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:21.768210 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:21.808853 1039759 cri.go:89] found id: ""
	I0729 14:40:21.808886 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.808897 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:21.808905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:21.808974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:21.843432 1039759 cri.go:89] found id: ""
	I0729 14:40:21.843472 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.843484 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:21.843493 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:21.843576 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:21.876497 1039759 cri.go:89] found id: ""
	I0729 14:40:21.876535 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.876547 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:21.876555 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:21.876633 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:21.911528 1039759 cri.go:89] found id: ""
	I0729 14:40:21.911556 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.911565 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:21.911571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:21.911626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:21.944514 1039759 cri.go:89] found id: ""
	I0729 14:40:21.944548 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.944560 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:21.944569 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:21.944641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:21.978113 1039759 cri.go:89] found id: ""
	I0729 14:40:21.978151 1039759 logs.go:276] 0 containers: []
	W0729 14:40:21.978162 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:21.978170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:21.978233 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:22.012390 1039759 cri.go:89] found id: ""
	I0729 14:40:22.012438 1039759 logs.go:276] 0 containers: []
	W0729 14:40:22.012449 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:22.012461 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:22.012484 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:22.027921 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:22.027952 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:22.095087 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:22.095115 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:22.095132 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:22.178462 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:22.178495 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:22.220155 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:22.220188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
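The cycle above repeats roughly every three seconds: minikube probes for an apiserver process with pgrep, lists CRI containers for each control-plane component with crictl, finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs, with the describe-nodes step failing because nothing answers on localhost:8443. Purely as an illustrative aid (not minikube's actual cri.go or ssh_runner.go code), the short Go sketch below approximates that per-component check; it assumes crictl is installed on the node and runs it locally via os/exec instead of over minikube's SSH runner, and the component list mirrors the names seen in the log.

// Hedged sketch only: a local approximation of the repeated control-plane
// container check visible in the log above. Assumes root access and crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Equivalent shell step seen in the log:
		//   sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the "No container was found matching ..." warnings above.
			fmt.Printf("%s: 0 containers\n", name)
		} else {
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}
}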
	I0729 14:40:22.794431 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.295391 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:23.174050 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.673437 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:22.708012 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:25.207491 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:24.771932 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:24.784764 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:24.784851 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:24.820445 1039759 cri.go:89] found id: ""
	I0729 14:40:24.820473 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.820485 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:24.820501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:24.820569 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:24.854753 1039759 cri.go:89] found id: ""
	I0729 14:40:24.854786 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.854796 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:24.854802 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:24.854856 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:24.889200 1039759 cri.go:89] found id: ""
	I0729 14:40:24.889230 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.889242 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:24.889250 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:24.889312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:24.932383 1039759 cri.go:89] found id: ""
	I0729 14:40:24.932435 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.932447 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:24.932454 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:24.932515 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:24.971830 1039759 cri.go:89] found id: ""
	I0729 14:40:24.971859 1039759 logs.go:276] 0 containers: []
	W0729 14:40:24.971871 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:24.971879 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:24.971936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:25.014336 1039759 cri.go:89] found id: ""
	I0729 14:40:25.014374 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.014386 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:25.014397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:25.014464 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:25.048131 1039759 cri.go:89] found id: ""
	I0729 14:40:25.048161 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.048171 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:25.048177 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:25.048232 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:25.089830 1039759 cri.go:89] found id: ""
	I0729 14:40:25.089866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:25.089878 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:25.089893 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:25.089907 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:25.172078 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:25.172113 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:25.221629 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:25.221661 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:25.291761 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:25.291806 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:25.314521 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:25.314559 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:25.402738 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:27.903335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:27.918335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:27.918413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:27.951929 1039759 cri.go:89] found id: ""
	I0729 14:40:27.951955 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.951966 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:27.951972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:27.952029 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:27.986229 1039759 cri.go:89] found id: ""
	I0729 14:40:27.986266 1039759 logs.go:276] 0 containers: []
	W0729 14:40:27.986279 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:27.986287 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:27.986352 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:28.019467 1039759 cri.go:89] found id: ""
	I0729 14:40:28.019504 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.019517 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:28.019524 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:28.019590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:28.053762 1039759 cri.go:89] found id: ""
	I0729 14:40:28.053790 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.053799 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:28.053806 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:28.053858 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:28.088947 1039759 cri.go:89] found id: ""
	I0729 14:40:28.088975 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.088989 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:28.088996 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:28.089070 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:28.130018 1039759 cri.go:89] found id: ""
	I0729 14:40:28.130052 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.130064 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:28.130072 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:28.130143 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:28.163682 1039759 cri.go:89] found id: ""
	I0729 14:40:28.163715 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.163725 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:28.163734 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:28.163802 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:28.199698 1039759 cri.go:89] found id: ""
	I0729 14:40:28.199732 1039759 logs.go:276] 0 containers: []
	W0729 14:40:28.199744 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:28.199757 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:28.199774 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:28.253735 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:28.253776 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:28.267786 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:28.267825 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:28.337218 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:28.337250 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:28.337265 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:28.419644 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:28.419688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:27.793963 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.293775 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:28.172846 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.173544 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:27.707884 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:29.708174 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:30.958707 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:30.972073 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:30.972146 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:31.016629 1039759 cri.go:89] found id: ""
	I0729 14:40:31.016662 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.016673 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:31.016681 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:31.016747 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:31.058891 1039759 cri.go:89] found id: ""
	I0729 14:40:31.058921 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.058930 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:31.058936 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:31.059004 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:31.096599 1039759 cri.go:89] found id: ""
	I0729 14:40:31.096633 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.096645 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:31.096654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:31.096741 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:31.143525 1039759 cri.go:89] found id: ""
	I0729 14:40:31.143554 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.143562 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:31.143568 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:31.143628 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:31.180188 1039759 cri.go:89] found id: ""
	I0729 14:40:31.180220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.180230 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:31.180239 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:31.180310 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:31.219995 1039759 cri.go:89] found id: ""
	I0729 14:40:31.220026 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.220037 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:31.220045 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:31.220108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:31.254137 1039759 cri.go:89] found id: ""
	I0729 14:40:31.254182 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.254192 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:31.254200 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:31.254272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:31.288065 1039759 cri.go:89] found id: ""
	I0729 14:40:31.288098 1039759 logs.go:276] 0 containers: []
	W0729 14:40:31.288109 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:31.288122 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:31.288137 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:31.341299 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:31.341338 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:31.355357 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:31.355387 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:31.427414 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:31.427439 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:31.427453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:31.508372 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:31.508439 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:32.294256 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.295131 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:32.174315 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.674462 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.208183 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:36.707763 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:34.052770 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:34.066300 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:34.066366 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:34.104242 1039759 cri.go:89] found id: ""
	I0729 14:40:34.104278 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.104290 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:34.104299 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:34.104367 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:34.143092 1039759 cri.go:89] found id: ""
	I0729 14:40:34.143125 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.143137 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:34.143145 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:34.143216 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:34.177966 1039759 cri.go:89] found id: ""
	I0729 14:40:34.177993 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.178002 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:34.178011 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:34.178098 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:34.218325 1039759 cri.go:89] found id: ""
	I0729 14:40:34.218351 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.218361 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:34.218369 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:34.218437 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:34.256632 1039759 cri.go:89] found id: ""
	I0729 14:40:34.256665 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.256675 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:34.256683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:34.256753 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:34.290713 1039759 cri.go:89] found id: ""
	I0729 14:40:34.290739 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.290747 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:34.290753 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:34.290816 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:34.331345 1039759 cri.go:89] found id: ""
	I0729 14:40:34.331378 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.331389 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:34.331397 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:34.331468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:34.370184 1039759 cri.go:89] found id: ""
	I0729 14:40:34.370214 1039759 logs.go:276] 0 containers: []
	W0729 14:40:34.370226 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:34.370239 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:34.370256 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:34.448667 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:34.448709 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:34.492943 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:34.492974 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:34.548784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:34.548827 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:34.565353 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:34.565389 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:34.639411 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.140039 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:37.153732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:37.153806 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:37.189360 1039759 cri.go:89] found id: ""
	I0729 14:40:37.189389 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.189398 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:37.189404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:37.189474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:37.225790 1039759 cri.go:89] found id: ""
	I0729 14:40:37.225820 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.225831 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:37.225839 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:37.225914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:37.261742 1039759 cri.go:89] found id: ""
	I0729 14:40:37.261772 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.261782 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:37.261791 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:37.261862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:37.295791 1039759 cri.go:89] found id: ""
	I0729 14:40:37.295826 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.295835 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:37.295843 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:37.295908 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:37.331290 1039759 cri.go:89] found id: ""
	I0729 14:40:37.331324 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.331334 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:37.331343 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:37.331413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:37.366150 1039759 cri.go:89] found id: ""
	I0729 14:40:37.366183 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.366195 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:37.366203 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:37.366273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:37.400983 1039759 cri.go:89] found id: ""
	I0729 14:40:37.401019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.401030 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:37.401038 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:37.401110 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:37.435333 1039759 cri.go:89] found id: ""
	I0729 14:40:37.435368 1039759 logs.go:276] 0 containers: []
	W0729 14:40:37.435379 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:37.435391 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:37.435407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:37.488020 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:37.488057 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:37.501543 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:37.501573 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:37.576006 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:37.576033 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:37.576050 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:37.658600 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:37.658641 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:36.794615 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:38.795414 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:37.175174 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.674361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:39.207946 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:41.707724 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:40.200763 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:40.216048 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:40.216121 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:40.253969 1039759 cri.go:89] found id: ""
	I0729 14:40:40.253996 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.254005 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:40.254012 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:40.254078 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:40.289557 1039759 cri.go:89] found id: ""
	I0729 14:40:40.289595 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.289608 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:40.289616 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:40.289698 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:40.329756 1039759 cri.go:89] found id: ""
	I0729 14:40:40.329799 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.329823 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:40.329833 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:40.329906 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:40.365281 1039759 cri.go:89] found id: ""
	I0729 14:40:40.365315 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.365327 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:40.365335 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:40.365403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:40.401300 1039759 cri.go:89] found id: ""
	I0729 14:40:40.401327 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.401336 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:40.401342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:40.401398 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:40.435679 1039759 cri.go:89] found id: ""
	I0729 14:40:40.435710 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.435719 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:40.435726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:40.435781 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:40.475825 1039759 cri.go:89] found id: ""
	I0729 14:40:40.475851 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.475859 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:40.475866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:40.475926 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:40.512153 1039759 cri.go:89] found id: ""
	I0729 14:40:40.512184 1039759 logs.go:276] 0 containers: []
	W0729 14:40:40.512191 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:40.512202 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:40.512215 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:40.563983 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:40.564022 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:40.578823 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:40.578853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:40.650282 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:40.650311 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:40.650328 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:40.734933 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:40.734980 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.280095 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:43.294284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:43.294361 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:43.328862 1039759 cri.go:89] found id: ""
	I0729 14:40:43.328890 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.328899 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:43.328905 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:43.328971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:43.366321 1039759 cri.go:89] found id: ""
	I0729 14:40:43.366364 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.366376 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:43.366384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:43.366459 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:43.400189 1039759 cri.go:89] found id: ""
	I0729 14:40:43.400220 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.400229 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:43.400235 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:43.400299 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:43.438521 1039759 cri.go:89] found id: ""
	I0729 14:40:43.438562 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.438582 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:43.438594 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:43.438665 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:43.473931 1039759 cri.go:89] found id: ""
	I0729 14:40:43.473958 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.473966 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:43.473972 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:43.474035 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:43.511460 1039759 cri.go:89] found id: ""
	I0729 14:40:43.511490 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.511497 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:43.511506 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:43.511563 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:43.547255 1039759 cri.go:89] found id: ""
	I0729 14:40:43.547290 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.547301 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:43.547309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:43.547375 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:43.582384 1039759 cri.go:89] found id: ""
	I0729 14:40:43.582418 1039759 logs.go:276] 0 containers: []
	W0729 14:40:43.582428 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:43.582441 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:43.582459 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:43.595747 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:43.595780 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:43.665389 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:43.665413 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:43.665427 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:43.752669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:43.752712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:43.797239 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:43.797272 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:41.294242 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:43.294985 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:45.794449 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:42.173495 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.173830 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:44.207593 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.706855 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.352841 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:46.368204 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:46.368278 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:46.406661 1039759 cri.go:89] found id: ""
	I0729 14:40:46.406687 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.406695 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:46.406701 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:46.406761 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:46.443728 1039759 cri.go:89] found id: ""
	I0729 14:40:46.443760 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.443771 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:46.443778 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:46.443845 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:46.477632 1039759 cri.go:89] found id: ""
	I0729 14:40:46.477666 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.477677 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:46.477686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:46.477754 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:46.512510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.512538 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.512549 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:46.512557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:46.512629 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:46.550803 1039759 cri.go:89] found id: ""
	I0729 14:40:46.550834 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.550843 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:46.550848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:46.550914 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:46.591610 1039759 cri.go:89] found id: ""
	I0729 14:40:46.591640 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.591652 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:46.591661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:46.591723 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:46.631090 1039759 cri.go:89] found id: ""
	I0729 14:40:46.631122 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.631132 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:46.631139 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:46.631199 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:46.670510 1039759 cri.go:89] found id: ""
	I0729 14:40:46.670542 1039759 logs.go:276] 0 containers: []
	W0729 14:40:46.670554 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:46.670573 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:46.670590 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:46.725560 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:46.725594 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:46.739348 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:46.739372 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:46.812850 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:46.812874 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:46.812892 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:46.892922 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:46.892964 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:47.795538 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:50.293685 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:46.674514 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.174577 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:48.708243 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.207168 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:49.438741 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:49.452505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:49.452588 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:49.487294 1039759 cri.go:89] found id: ""
	I0729 14:40:49.487323 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.487331 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:49.487340 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:49.487407 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:49.521783 1039759 cri.go:89] found id: ""
	I0729 14:40:49.521816 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.521828 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:49.521836 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:49.521901 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:49.557039 1039759 cri.go:89] found id: ""
	I0729 14:40:49.557075 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.557086 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:49.557094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:49.557162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:49.590431 1039759 cri.go:89] found id: ""
	I0729 14:40:49.590462 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.590474 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:49.590494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:49.590574 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:49.626230 1039759 cri.go:89] found id: ""
	I0729 14:40:49.626260 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.626268 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:49.626274 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:49.626339 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:49.662030 1039759 cri.go:89] found id: ""
	I0729 14:40:49.662060 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.662068 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:49.662075 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:49.662130 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:49.699988 1039759 cri.go:89] found id: ""
	I0729 14:40:49.700019 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.700035 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:49.700076 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:49.700144 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:49.736830 1039759 cri.go:89] found id: ""
	I0729 14:40:49.736864 1039759 logs.go:276] 0 containers: []
	W0729 14:40:49.736873 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:49.736882 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:49.736895 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:49.775670 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:49.775703 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:49.830820 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:49.830853 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:49.846374 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:49.846407 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:49.917475 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:49.917502 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:49.917520 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.499291 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:52.513571 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:52.513641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:52.547437 1039759 cri.go:89] found id: ""
	I0729 14:40:52.547474 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.547487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:52.547495 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:52.547559 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:52.587664 1039759 cri.go:89] found id: ""
	I0729 14:40:52.587705 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.587718 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:52.587726 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:52.587799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:52.630642 1039759 cri.go:89] found id: ""
	I0729 14:40:52.630670 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.630678 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:52.630684 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:52.630740 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:52.665978 1039759 cri.go:89] found id: ""
	I0729 14:40:52.666010 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.666022 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:52.666030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:52.666103 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:52.701111 1039759 cri.go:89] found id: ""
	I0729 14:40:52.701140 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.701148 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:52.701155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:52.701211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:52.744219 1039759 cri.go:89] found id: ""
	I0729 14:40:52.744247 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.744257 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:52.744265 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:52.744329 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:52.781081 1039759 cri.go:89] found id: ""
	I0729 14:40:52.781113 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.781122 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:52.781128 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:52.781198 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:52.817938 1039759 cri.go:89] found id: ""
	I0729 14:40:52.817974 1039759 logs.go:276] 0 containers: []
	W0729 14:40:52.817985 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:52.817999 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:52.818016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:52.895387 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:52.895416 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:52.895433 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:52.976313 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:52.976356 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:53.013814 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:53.013852 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:53.065901 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:53.065937 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:52.798083 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.293459 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:51.674103 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:54.174456 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:53.208082 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.707719 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:55.580590 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:55.595023 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:55.595108 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:55.631449 1039759 cri.go:89] found id: ""
	I0729 14:40:55.631479 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.631487 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:55.631494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:55.631551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:55.666245 1039759 cri.go:89] found id: ""
	I0729 14:40:55.666274 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.666283 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:55.666296 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:55.666364 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:55.706582 1039759 cri.go:89] found id: ""
	I0729 14:40:55.706611 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.706621 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:55.706629 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:55.706696 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:55.741930 1039759 cri.go:89] found id: ""
	I0729 14:40:55.741962 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.741973 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:55.741990 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:55.742058 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:55.781440 1039759 cri.go:89] found id: ""
	I0729 14:40:55.781475 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.781486 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:55.781494 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:55.781599 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:55.825329 1039759 cri.go:89] found id: ""
	I0729 14:40:55.825366 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.825377 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:55.825387 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:55.825466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:55.860834 1039759 cri.go:89] found id: ""
	I0729 14:40:55.860866 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.860878 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:55.860886 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:55.860950 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:55.895460 1039759 cri.go:89] found id: ""
	I0729 14:40:55.895492 1039759 logs.go:276] 0 containers: []
	W0729 14:40:55.895502 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:55.895514 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:55.895531 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:55.951739 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:55.951781 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:55.965760 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:55.965792 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:56.044422 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:56.044458 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:56.044477 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:40:56.123669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:56.123714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:58.668279 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:40:58.682912 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:40:58.682974 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:40:58.718757 1039759 cri.go:89] found id: ""
	I0729 14:40:58.718787 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.718798 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:40:58.718807 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:40:58.718861 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:40:58.756986 1039759 cri.go:89] found id: ""
	I0729 14:40:58.757015 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.757025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:40:58.757031 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:40:58.757092 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:40:58.797572 1039759 cri.go:89] found id: ""
	I0729 14:40:58.797600 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.797611 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:40:58.797620 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:40:58.797689 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:40:58.839410 1039759 cri.go:89] found id: ""
	I0729 14:40:58.839442 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.839453 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:40:58.839461 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:40:58.839523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:40:57.293935 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:59.294805 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:56.673078 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.674177 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:01.173709 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:57.708051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:00.207822 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:02.208128 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:40:58.874477 1039759 cri.go:89] found id: ""
	I0729 14:40:58.874508 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.874519 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:40:58.874528 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:40:58.874602 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:40:58.910248 1039759 cri.go:89] found id: ""
	I0729 14:40:58.910281 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.910296 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:40:58.910307 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:40:58.910368 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:40:58.944845 1039759 cri.go:89] found id: ""
	I0729 14:40:58.944879 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.944890 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:40:58.944896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:40:58.944955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:40:58.978818 1039759 cri.go:89] found id: ""
	I0729 14:40:58.978854 1039759 logs.go:276] 0 containers: []
	W0729 14:40:58.978867 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:40:58.978879 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:40:58.978898 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:40:59.018961 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:40:59.018993 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:40:59.069883 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:40:59.069920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:40:59.083277 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:40:59.083304 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:40:59.159470 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:40:59.159494 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:40:59.159511 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:01.746915 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:01.759883 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:01.759949 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:01.796563 1039759 cri.go:89] found id: ""
	I0729 14:41:01.796589 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.796602 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:01.796608 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:01.796691 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:01.831464 1039759 cri.go:89] found id: ""
	I0729 14:41:01.831499 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.831511 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:01.831520 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:01.831586 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:01.868633 1039759 cri.go:89] found id: ""
	I0729 14:41:01.868660 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.868668 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:01.868674 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:01.868732 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:01.903154 1039759 cri.go:89] found id: ""
	I0729 14:41:01.903183 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.903194 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:01.903202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:01.903272 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:01.938256 1039759 cri.go:89] found id: ""
	I0729 14:41:01.938292 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.938304 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:01.938312 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:01.938384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:01.978117 1039759 cri.go:89] found id: ""
	I0729 14:41:01.978147 1039759 logs.go:276] 0 containers: []
	W0729 14:41:01.978159 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:01.978168 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:01.978242 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:02.014061 1039759 cri.go:89] found id: ""
	I0729 14:41:02.014089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.014100 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:02.014108 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:02.014176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:02.050133 1039759 cri.go:89] found id: ""
	I0729 14:41:02.050165 1039759 logs.go:276] 0 containers: []
	W0729 14:41:02.050177 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:02.050189 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:02.050206 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:02.101188 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:02.101253 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:02.114343 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:02.114369 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:02.190309 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:02.190338 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:02.190354 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:02.266895 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:02.266939 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:01.794976 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.295199 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:03.176713 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:05.673543 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.708032 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:07.207702 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:04.809474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:04.824652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:04.824725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:04.858442 1039759 cri.go:89] found id: ""
	I0729 14:41:04.858474 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.858483 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:04.858490 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:04.858542 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:04.893199 1039759 cri.go:89] found id: ""
	I0729 14:41:04.893229 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.893237 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:04.893243 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:04.893297 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:04.929480 1039759 cri.go:89] found id: ""
	I0729 14:41:04.929512 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.929524 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:04.929532 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:04.929601 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:04.965097 1039759 cri.go:89] found id: ""
	I0729 14:41:04.965127 1039759 logs.go:276] 0 containers: []
	W0729 14:41:04.965139 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:04.965147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:04.965228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:05.003419 1039759 cri.go:89] found id: ""
	I0729 14:41:05.003449 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.003460 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:05.003467 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:05.003557 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:05.037408 1039759 cri.go:89] found id: ""
	I0729 14:41:05.037439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.037451 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:05.037458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:05.037527 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:05.072909 1039759 cri.go:89] found id: ""
	I0729 14:41:05.072942 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.072953 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:05.072961 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:05.073034 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:05.123731 1039759 cri.go:89] found id: ""
	I0729 14:41:05.123764 1039759 logs.go:276] 0 containers: []
	W0729 14:41:05.123776 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:05.123787 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:05.123802 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:05.188687 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:05.188732 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:05.204119 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:05.204160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:05.294702 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:05.294732 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:05.294750 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:05.377412 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:05.377456 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:07.923437 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:07.937633 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:07.937711 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:07.976813 1039759 cri.go:89] found id: ""
	I0729 14:41:07.976850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:07.976861 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:07.976872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:07.976946 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:08.013051 1039759 cri.go:89] found id: ""
	I0729 14:41:08.013089 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.013100 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:08.013109 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:08.013177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:08.047372 1039759 cri.go:89] found id: ""
	I0729 14:41:08.047404 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.047413 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:08.047420 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:08.047477 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:08.080555 1039759 cri.go:89] found id: ""
	I0729 14:41:08.080594 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.080607 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:08.080615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:08.080684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:08.117054 1039759 cri.go:89] found id: ""
	I0729 14:41:08.117087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.117098 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:08.117106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:08.117175 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:08.152270 1039759 cri.go:89] found id: ""
	I0729 14:41:08.152295 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.152303 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:08.152309 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:08.152373 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:08.188804 1039759 cri.go:89] found id: ""
	I0729 14:41:08.188830 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.188842 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:08.188848 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:08.188903 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:08.225101 1039759 cri.go:89] found id: ""
	I0729 14:41:08.225139 1039759 logs.go:276] 0 containers: []
	W0729 14:41:08.225151 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:08.225164 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:08.225182 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:08.278721 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:08.278759 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:08.293417 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:08.293453 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:08.371802 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:08.371825 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:08.371843 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:08.452233 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:08.452274 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:06.795598 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.294006 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:08.175147 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.673937 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:09.707777 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:12.208180 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:10.993379 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:11.007599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:11.007668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:11.045603 1039759 cri.go:89] found id: ""
	I0729 14:41:11.045652 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.045675 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:11.045683 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:11.045746 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:11.079682 1039759 cri.go:89] found id: ""
	I0729 14:41:11.079711 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.079722 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:11.079730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:11.079797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:11.122138 1039759 cri.go:89] found id: ""
	I0729 14:41:11.122167 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.122180 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:11.122185 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:11.122249 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:11.157416 1039759 cri.go:89] found id: ""
	I0729 14:41:11.157444 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.157452 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:11.157458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:11.157514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:11.198589 1039759 cri.go:89] found id: ""
	I0729 14:41:11.198631 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.198643 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:11.198652 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:11.198725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:11.238329 1039759 cri.go:89] found id: ""
	I0729 14:41:11.238360 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.238369 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:11.238376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:11.238442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:11.273283 1039759 cri.go:89] found id: ""
	I0729 14:41:11.273313 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.273322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:11.273328 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:11.273382 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:11.313927 1039759 cri.go:89] found id: ""
	I0729 14:41:11.313972 1039759 logs.go:276] 0 containers: []
	W0729 14:41:11.313984 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:11.313997 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:11.314014 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:11.366507 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:11.366546 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:11.380529 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:11.380566 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:11.451839 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:11.451862 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:11.451882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:11.537109 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:11.537150 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:11.294967 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.793738 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:13.173482 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:15.673025 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.706708 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:16.707135 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:14.104794 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:14.117474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:14.117541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:14.154117 1039759 cri.go:89] found id: ""
	I0729 14:41:14.154151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.154163 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:14.154171 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:14.154236 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:14.195762 1039759 cri.go:89] found id: ""
	I0729 14:41:14.195793 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.195804 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:14.195812 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:14.195875 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:14.231434 1039759 cri.go:89] found id: ""
	I0729 14:41:14.231460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.231467 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:14.231474 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:14.231523 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:14.264802 1039759 cri.go:89] found id: ""
	I0729 14:41:14.264839 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.264851 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:14.264859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:14.264932 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:14.300162 1039759 cri.go:89] found id: ""
	I0729 14:41:14.300184 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.300194 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:14.300202 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:14.300262 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:14.335351 1039759 cri.go:89] found id: ""
	I0729 14:41:14.335385 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.335396 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:14.335404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:14.335468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:14.370064 1039759 cri.go:89] found id: ""
	I0729 14:41:14.370096 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.370107 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:14.370115 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:14.370184 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:14.406506 1039759 cri.go:89] found id: ""
	I0729 14:41:14.406538 1039759 logs.go:276] 0 containers: []
	W0729 14:41:14.406549 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:14.406562 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:14.406579 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:14.445641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:14.445681 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:14.496132 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:14.496165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:14.509732 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:14.509767 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:14.581519 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:14.581541 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:14.581558 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.164487 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:17.178359 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:17.178447 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:17.213780 1039759 cri.go:89] found id: ""
	I0729 14:41:17.213869 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.213887 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:17.213896 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:17.213966 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:17.251006 1039759 cri.go:89] found id: ""
	I0729 14:41:17.251045 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.251056 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:17.251063 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:17.251135 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:17.306624 1039759 cri.go:89] found id: ""
	I0729 14:41:17.306654 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.306683 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:17.306691 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:17.306775 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:17.358882 1039759 cri.go:89] found id: ""
	I0729 14:41:17.358915 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.358927 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:17.358935 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:17.359008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:17.408592 1039759 cri.go:89] found id: ""
	I0729 14:41:17.408620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.408636 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:17.408642 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:17.408705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:17.445201 1039759 cri.go:89] found id: ""
	I0729 14:41:17.445228 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.445236 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:17.445242 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:17.445305 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:17.477441 1039759 cri.go:89] found id: ""
	I0729 14:41:17.477483 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.477511 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:17.477518 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:17.477591 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:17.509148 1039759 cri.go:89] found id: ""
	I0729 14:41:17.509179 1039759 logs.go:276] 0 containers: []
	W0729 14:41:17.509190 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:17.509203 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:17.509220 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:17.559784 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:17.559823 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:17.574163 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:17.574199 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:17.644249 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:17.644277 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:17.644294 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:17.720652 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:17.720688 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:16.293977 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.793489 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.793760 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:17.674099 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.173742 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:18.707238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:21.209948 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:20.261591 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:20.274649 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:20.274731 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:20.311561 1039759 cri.go:89] found id: ""
	I0729 14:41:20.311591 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.311600 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:20.311606 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:20.311668 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:20.350267 1039759 cri.go:89] found id: ""
	I0729 14:41:20.350300 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.350313 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:20.350322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:20.350379 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:20.384183 1039759 cri.go:89] found id: ""
	I0729 14:41:20.384213 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.384220 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:20.384227 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:20.384288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:20.422330 1039759 cri.go:89] found id: ""
	I0729 14:41:20.422358 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.422367 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:20.422373 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:20.422442 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:20.465537 1039759 cri.go:89] found id: ""
	I0729 14:41:20.465568 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.465577 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:20.465586 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:20.465663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:20.507661 1039759 cri.go:89] found id: ""
	I0729 14:41:20.507691 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.507701 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:20.507710 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:20.507774 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:20.545830 1039759 cri.go:89] found id: ""
	I0729 14:41:20.545857 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.545866 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:20.545872 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:20.545936 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:20.586311 1039759 cri.go:89] found id: ""
	I0729 14:41:20.586345 1039759 logs.go:276] 0 containers: []
	W0729 14:41:20.586354 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:20.586364 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:20.586379 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:20.635183 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:20.635224 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:20.649660 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:20.649701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:20.729588 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:20.729613 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:20.729632 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:20.811565 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:20.811605 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:23.354318 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:23.367784 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:23.367862 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:23.401929 1039759 cri.go:89] found id: ""
	I0729 14:41:23.401956 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.401965 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:23.401970 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:23.402033 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:23.437130 1039759 cri.go:89] found id: ""
	I0729 14:41:23.437161 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.437185 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:23.437205 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:23.437267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:23.474029 1039759 cri.go:89] found id: ""
	I0729 14:41:23.474066 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.474078 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:23.474087 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:23.474159 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:23.506678 1039759 cri.go:89] found id: ""
	I0729 14:41:23.506714 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.506725 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:23.506732 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:23.506791 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:23.541578 1039759 cri.go:89] found id: ""
	I0729 14:41:23.541618 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.541628 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:23.541636 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:23.541709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:23.575852 1039759 cri.go:89] found id: ""
	I0729 14:41:23.575883 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.575891 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:23.575898 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:23.575955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:23.610611 1039759 cri.go:89] found id: ""
	I0729 14:41:23.610638 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.610646 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:23.610653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:23.610717 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:23.650403 1039759 cri.go:89] found id: ""
	I0729 14:41:23.650429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:23.650438 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:23.650448 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:23.650460 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:23.701856 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:23.701899 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:23.716925 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:23.716958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:23.790678 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:23.790699 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:23.790717 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:23.873204 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:23.873242 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:22.794021 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:25.294289 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:22.173787 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:24.673139 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:23.708892 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.207121 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.414319 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:26.428069 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:26.428152 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:26.462538 1039759 cri.go:89] found id: ""
	I0729 14:41:26.462578 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.462590 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:26.462599 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:26.462687 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:26.496461 1039759 cri.go:89] found id: ""
	I0729 14:41:26.496501 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.496513 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:26.496521 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:26.496593 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:26.534152 1039759 cri.go:89] found id: ""
	I0729 14:41:26.534190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.534203 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:26.534210 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:26.534273 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:26.572986 1039759 cri.go:89] found id: ""
	I0729 14:41:26.573016 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.573024 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:26.573030 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:26.573097 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:26.607330 1039759 cri.go:89] found id: ""
	I0729 14:41:26.607359 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.607370 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:26.607378 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:26.607445 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:26.643023 1039759 cri.go:89] found id: ""
	I0729 14:41:26.643056 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.643067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:26.643078 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:26.643145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:26.679820 1039759 cri.go:89] found id: ""
	I0729 14:41:26.679846 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.679856 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:26.679865 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:26.679930 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:26.716433 1039759 cri.go:89] found id: ""
	I0729 14:41:26.716462 1039759 logs.go:276] 0 containers: []
	W0729 14:41:26.716470 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:26.716480 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:26.716494 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:26.794508 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:26.794529 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:26.794542 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:26.876663 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:26.876701 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:26.917309 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:26.917343 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:26.969397 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:26.969436 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:27.294711 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.793946 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:26.679220 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.173259 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:31.175213 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:28.207613 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:30.707297 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:29.483935 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:29.497502 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:29.497585 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:29.532671 1039759 cri.go:89] found id: ""
	I0729 14:41:29.532698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.532712 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:29.532719 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:29.532784 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:29.568058 1039759 cri.go:89] found id: ""
	I0729 14:41:29.568085 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.568096 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:29.568103 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:29.568176 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:29.601173 1039759 cri.go:89] found id: ""
	I0729 14:41:29.601206 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.601216 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:29.601225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:29.601284 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:29.634333 1039759 cri.go:89] found id: ""
	I0729 14:41:29.634372 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.634384 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:29.634393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:29.634460 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:29.669669 1039759 cri.go:89] found id: ""
	I0729 14:41:29.669698 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.669706 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:29.669712 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:29.669777 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:29.702847 1039759 cri.go:89] found id: ""
	I0729 14:41:29.702876 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.702886 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:29.702894 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:29.702960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:29.740713 1039759 cri.go:89] found id: ""
	I0729 14:41:29.740743 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.740754 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:29.740762 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:29.740846 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:29.777795 1039759 cri.go:89] found id: ""
	I0729 14:41:29.777829 1039759 logs.go:276] 0 containers: []
	W0729 14:41:29.777841 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:29.777853 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:29.777869 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:29.858713 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:29.858758 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:29.896873 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:29.896914 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:29.946905 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:29.946945 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:29.960136 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:29.960170 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:30.035951 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.536130 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:32.549431 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:32.549501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:32.586069 1039759 cri.go:89] found id: ""
	I0729 14:41:32.586098 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.586117 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:32.586125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:32.586183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:32.623094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.623123 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.623132 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:32.623138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:32.623205 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:32.658370 1039759 cri.go:89] found id: ""
	I0729 14:41:32.658406 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.658418 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:32.658426 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:32.658492 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:32.696436 1039759 cri.go:89] found id: ""
	I0729 14:41:32.696469 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.696478 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:32.696484 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:32.696551 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:32.731306 1039759 cri.go:89] found id: ""
	I0729 14:41:32.731340 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.731352 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:32.731361 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:32.731431 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:32.767049 1039759 cri.go:89] found id: ""
	I0729 14:41:32.767087 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.767098 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:32.767106 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:32.767179 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:32.805094 1039759 cri.go:89] found id: ""
	I0729 14:41:32.805126 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.805138 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:32.805147 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:32.805223 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:32.840088 1039759 cri.go:89] found id: ""
	I0729 14:41:32.840116 1039759 logs.go:276] 0 containers: []
	W0729 14:41:32.840125 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:32.840137 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:32.840155 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:32.854065 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:32.854095 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:32.921447 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:32.921477 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:32.921493 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:33.005086 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:33.005129 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:33.042555 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:33.042617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:31.795000 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:34.293349 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:33.673734 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.674275 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:32.707849 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.210238 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:35.593173 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:35.605965 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:35.606031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:35.639315 1039759 cri.go:89] found id: ""
	I0729 14:41:35.639355 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.639367 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:35.639374 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:35.639466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:35.678657 1039759 cri.go:89] found id: ""
	I0729 14:41:35.678686 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.678695 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:35.678700 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:35.678764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:35.714108 1039759 cri.go:89] found id: ""
	I0729 14:41:35.714136 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.714147 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:35.714155 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:35.714220 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:35.748793 1039759 cri.go:89] found id: ""
	I0729 14:41:35.748820 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.748831 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:35.748837 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:35.748891 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:35.788853 1039759 cri.go:89] found id: ""
	I0729 14:41:35.788884 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.788895 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:35.788903 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:35.788971 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:35.825032 1039759 cri.go:89] found id: ""
	I0729 14:41:35.825059 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.825067 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:35.825074 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:35.825126 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:35.859990 1039759 cri.go:89] found id: ""
	I0729 14:41:35.860022 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.860033 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:35.860041 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:35.860131 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:35.894318 1039759 cri.go:89] found id: ""
	I0729 14:41:35.894352 1039759 logs.go:276] 0 containers: []
	W0729 14:41:35.894364 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:35.894377 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:35.894393 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:35.907591 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:35.907617 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:35.975000 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:35.975023 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:35.975040 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:36.056188 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:36.056226 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:36.094569 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:36.094606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.648685 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:38.661546 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:38.661612 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:38.698658 1039759 cri.go:89] found id: ""
	I0729 14:41:38.698692 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.698704 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:38.698711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:38.698797 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:38.731239 1039759 cri.go:89] found id: ""
	I0729 14:41:38.731274 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.731282 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:38.731288 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:38.731341 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:38.766549 1039759 cri.go:89] found id: ""
	I0729 14:41:38.766583 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.766594 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:38.766602 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:38.766663 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:38.803347 1039759 cri.go:89] found id: ""
	I0729 14:41:38.803374 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.803385 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:38.803393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:38.803467 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:38.840327 1039759 cri.go:89] found id: ""
	I0729 14:41:38.840363 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.840374 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:38.840384 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:38.840480 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:38.874181 1039759 cri.go:89] found id: ""
	I0729 14:41:38.874211 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.874219 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:38.874225 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:38.874293 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:36.297301 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.794975 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.173718 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:40.675880 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:37.707171 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:39.709125 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:42.206569 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:38.908642 1039759 cri.go:89] found id: ""
	I0729 14:41:38.908674 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.908686 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:38.908694 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:38.908762 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:38.945081 1039759 cri.go:89] found id: ""
	I0729 14:41:38.945107 1039759 logs.go:276] 0 containers: []
	W0729 14:41:38.945116 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:38.945126 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:38.945140 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:38.999792 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:38.999826 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:39.013396 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:39.013421 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:39.077975 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:39.077998 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:39.078016 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:39.169606 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:39.169654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.716258 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:41.730508 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:41.730579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:41.766457 1039759 cri.go:89] found id: ""
	I0729 14:41:41.766490 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.766498 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:41.766505 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:41.766571 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:41.801073 1039759 cri.go:89] found id: ""
	I0729 14:41:41.801099 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.801109 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:41.801117 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:41.801178 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:41.836962 1039759 cri.go:89] found id: ""
	I0729 14:41:41.836986 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.836997 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:41.837005 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:41.837072 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:41.870169 1039759 cri.go:89] found id: ""
	I0729 14:41:41.870195 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.870205 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:41.870213 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:41.870274 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:41.902298 1039759 cri.go:89] found id: ""
	I0729 14:41:41.902323 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.902331 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:41.902337 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:41.902387 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:41.935394 1039759 cri.go:89] found id: ""
	I0729 14:41:41.935429 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.935441 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:41.935449 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:41.935513 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:41.972397 1039759 cri.go:89] found id: ""
	I0729 14:41:41.972437 1039759 logs.go:276] 0 containers: []
	W0729 14:41:41.972448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:41.972456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:41.972525 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:42.006477 1039759 cri.go:89] found id: ""
	I0729 14:41:42.006503 1039759 logs.go:276] 0 containers: []
	W0729 14:41:42.006513 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:42.006526 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:42.006540 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:42.053853 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:42.053886 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:42.067143 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:42.067172 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:42.135406 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:42.135432 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:42.135449 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:42.212571 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:42.212603 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:41.293241 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.294160 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.793697 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:43.173087 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:45.174327 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.206854 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:46.707167 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:44.751283 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:44.764600 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:44.764688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:44.800821 1039759 cri.go:89] found id: ""
	I0729 14:41:44.800850 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.800857 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:44.800863 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:44.800924 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:44.834638 1039759 cri.go:89] found id: ""
	I0729 14:41:44.834670 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.834680 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:44.834686 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:44.834744 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:44.870198 1039759 cri.go:89] found id: ""
	I0729 14:41:44.870225 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.870237 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:44.870245 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:44.870312 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:44.904588 1039759 cri.go:89] found id: ""
	I0729 14:41:44.904620 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.904631 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:44.904639 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:44.904713 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:44.939442 1039759 cri.go:89] found id: ""
	I0729 14:41:44.939467 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.939474 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:44.939480 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:44.939541 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:44.972771 1039759 cri.go:89] found id: ""
	I0729 14:41:44.972799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:44.972808 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:44.972815 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:44.972888 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:45.007513 1039759 cri.go:89] found id: ""
	I0729 14:41:45.007540 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.007549 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:45.007557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:45.007626 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:45.038752 1039759 cri.go:89] found id: ""
	I0729 14:41:45.038778 1039759 logs.go:276] 0 containers: []
	W0729 14:41:45.038787 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:45.038797 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:45.038821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:45.089807 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:45.089838 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:45.103188 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:45.103221 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:45.174509 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:45.174532 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:45.174554 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:45.255288 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:45.255327 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:47.799207 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:47.814781 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:47.814866 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:47.855111 1039759 cri.go:89] found id: ""
	I0729 14:41:47.855143 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.855156 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:47.855164 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:47.855230 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:47.892542 1039759 cri.go:89] found id: ""
	I0729 14:41:47.892577 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.892589 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:47.892603 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:47.892674 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:47.933408 1039759 cri.go:89] found id: ""
	I0729 14:41:47.933439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.933451 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:47.933458 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:47.933531 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:47.970397 1039759 cri.go:89] found id: ""
	I0729 14:41:47.970427 1039759 logs.go:276] 0 containers: []
	W0729 14:41:47.970439 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:47.970447 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:47.970514 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:48.006852 1039759 cri.go:89] found id: ""
	I0729 14:41:48.006880 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.006891 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:48.006899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:48.006967 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:48.046766 1039759 cri.go:89] found id: ""
	I0729 14:41:48.046799 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.046811 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:48.046820 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:48.046893 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:48.084354 1039759 cri.go:89] found id: ""
	I0729 14:41:48.084380 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.084387 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:48.084393 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:48.084468 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:48.121526 1039759 cri.go:89] found id: ""
	I0729 14:41:48.121559 1039759 logs.go:276] 0 containers: []
	W0729 14:41:48.121571 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:48.121582 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:48.121606 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:48.136753 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:48.136784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:48.206914 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:48.206942 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:48.206958 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:48.283843 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:48.283882 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:48.325845 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:48.325878 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:47.794096 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.295275 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:47.182903 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.672827 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:49.206572 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.206900 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:50.881346 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:50.894098 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:50.894177 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:50.927345 1039759 cri.go:89] found id: ""
	I0729 14:41:50.927375 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.927386 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:50.927399 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:50.927466 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:50.962700 1039759 cri.go:89] found id: ""
	I0729 14:41:50.962726 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.962734 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:50.962740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:50.962804 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:50.997299 1039759 cri.go:89] found id: ""
	I0729 14:41:50.997334 1039759 logs.go:276] 0 containers: []
	W0729 14:41:50.997346 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:50.997354 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:50.997419 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:51.030157 1039759 cri.go:89] found id: ""
	I0729 14:41:51.030190 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.030202 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:51.030211 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:51.030288 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:51.063123 1039759 cri.go:89] found id: ""
	I0729 14:41:51.063151 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.063162 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:51.063170 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:51.063237 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:51.096772 1039759 cri.go:89] found id: ""
	I0729 14:41:51.096819 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.096830 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:51.096838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:51.096912 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:51.131976 1039759 cri.go:89] found id: ""
	I0729 14:41:51.132004 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.132014 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:51.132022 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:51.132095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:51.167560 1039759 cri.go:89] found id: ""
	I0729 14:41:51.167599 1039759 logs.go:276] 0 containers: []
	W0729 14:41:51.167610 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:51.167622 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:51.167640 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:51.229416 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:51.229455 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:51.243576 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:51.243604 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:51.311103 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:51.311123 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:51.311139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:51.396369 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:51.396432 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:52.793981 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.294172 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:51.673945 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:54.173681 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:56.174098 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.207656 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:55.709310 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:53.942329 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:53.955960 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:53.956027 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:53.988039 1039759 cri.go:89] found id: ""
	I0729 14:41:53.988074 1039759 logs.go:276] 0 containers: []
	W0729 14:41:53.988085 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:53.988094 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:53.988162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:54.020948 1039759 cri.go:89] found id: ""
	I0729 14:41:54.020981 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.020992 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:54.020999 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:54.021067 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:54.053716 1039759 cri.go:89] found id: ""
	I0729 14:41:54.053744 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.053752 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:54.053759 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:54.053811 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:54.092348 1039759 cri.go:89] found id: ""
	I0729 14:41:54.092378 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.092390 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:54.092398 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:54.092471 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:54.126114 1039759 cri.go:89] found id: ""
	I0729 14:41:54.126176 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.126189 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:54.126199 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:54.126316 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:54.162125 1039759 cri.go:89] found id: ""
	I0729 14:41:54.162157 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.162167 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:54.162174 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:54.162241 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:54.202407 1039759 cri.go:89] found id: ""
	I0729 14:41:54.202439 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.202448 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:54.202456 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:54.202522 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:54.238650 1039759 cri.go:89] found id: ""
	I0729 14:41:54.238684 1039759 logs.go:276] 0 containers: []
	W0729 14:41:54.238695 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:54.238704 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:54.238718 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:54.291200 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:54.291243 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:54.306381 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:54.306415 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:54.371355 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:54.371384 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:54.371399 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:54.455200 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:54.455237 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:56.994689 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:41:57.007893 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:41:57.007958 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:41:57.041775 1039759 cri.go:89] found id: ""
	I0729 14:41:57.041808 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.041820 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:41:57.041828 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:41:57.041894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:41:57.075409 1039759 cri.go:89] found id: ""
	I0729 14:41:57.075442 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.075454 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:41:57.075462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:41:57.075524 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:41:57.120963 1039759 cri.go:89] found id: ""
	I0729 14:41:57.121000 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.121011 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:41:57.121019 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:41:57.121088 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:41:57.164882 1039759 cri.go:89] found id: ""
	I0729 14:41:57.164912 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.164923 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:41:57.164932 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:41:57.165001 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:41:57.198511 1039759 cri.go:89] found id: ""
	I0729 14:41:57.198537 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.198545 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:41:57.198550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:41:57.198604 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:41:57.238516 1039759 cri.go:89] found id: ""
	I0729 14:41:57.238544 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.238552 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:41:57.238559 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:41:57.238622 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:41:57.271823 1039759 cri.go:89] found id: ""
	I0729 14:41:57.271854 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.271865 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:41:57.271873 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:41:57.271937 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:41:57.308435 1039759 cri.go:89] found id: ""
	I0729 14:41:57.308460 1039759 logs.go:276] 0 containers: []
	W0729 14:41:57.308472 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:41:57.308483 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:41:57.308506 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:41:57.359783 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:41:57.359818 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:41:57.372669 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:41:57.372698 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:41:57.440979 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:41:57.441004 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:41:57.441018 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:41:57.520105 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:41:57.520139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:41:57.295421 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:59.793704 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.673850 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:01.172547 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:41:58.207493 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.208108 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:02.208334 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:00.060542 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:00.076125 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:00.076192 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:00.113095 1039759 cri.go:89] found id: ""
	I0729 14:42:00.113129 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.113137 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:00.113150 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:00.113206 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:00.154104 1039759 cri.go:89] found id: ""
	I0729 14:42:00.154132 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.154139 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:00.154146 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:00.154202 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:00.190416 1039759 cri.go:89] found id: ""
	I0729 14:42:00.190443 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.190454 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:00.190462 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:00.190532 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:00.228138 1039759 cri.go:89] found id: ""
	I0729 14:42:00.228173 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.228185 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:00.228192 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:00.228261 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:00.265679 1039759 cri.go:89] found id: ""
	I0729 14:42:00.265706 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.265715 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:00.265721 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:00.265787 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:00.300283 1039759 cri.go:89] found id: ""
	I0729 14:42:00.300315 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.300333 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:00.300341 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:00.300433 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:00.339224 1039759 cri.go:89] found id: ""
	I0729 14:42:00.339255 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.339264 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:00.339270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:00.339333 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:00.375780 1039759 cri.go:89] found id: ""
	I0729 14:42:00.375815 1039759 logs.go:276] 0 containers: []
	W0729 14:42:00.375826 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:00.375836 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:00.375851 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:00.425145 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:00.425190 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:00.438860 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:00.438891 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:00.512668 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:00.512695 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:00.512714 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:00.597083 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:00.597139 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.141962 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:03.156295 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:03.156372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:03.192860 1039759 cri.go:89] found id: ""
	I0729 14:42:03.192891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.192902 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:03.192911 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:03.192982 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:03.234078 1039759 cri.go:89] found id: ""
	I0729 14:42:03.234104 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.234113 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:03.234119 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:03.234171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:03.268099 1039759 cri.go:89] found id: ""
	I0729 14:42:03.268124 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.268131 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:03.268138 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:03.268197 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:03.306470 1039759 cri.go:89] found id: ""
	I0729 14:42:03.306498 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.306507 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:03.306513 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:03.306596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:03.341902 1039759 cri.go:89] found id: ""
	I0729 14:42:03.341933 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.341944 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:03.341952 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:03.342019 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:03.377235 1039759 cri.go:89] found id: ""
	I0729 14:42:03.377271 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.377282 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:03.377291 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:03.377355 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:03.411273 1039759 cri.go:89] found id: ""
	I0729 14:42:03.411308 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.411316 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:03.411322 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:03.411397 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:03.446482 1039759 cri.go:89] found id: ""
	I0729 14:42:03.446511 1039759 logs.go:276] 0 containers: []
	W0729 14:42:03.446519 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:03.446530 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:03.446545 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:03.460222 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:03.460262 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:03.548149 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:03.548175 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:03.548191 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:03.640563 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:03.640608 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:03.681685 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:03.681713 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:02.293412 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.793239 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:03.174082 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:05.674438 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:04.706798 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.707818 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:06.234967 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:06.249656 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:06.249726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:06.284768 1039759 cri.go:89] found id: ""
	I0729 14:42:06.284798 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.284810 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:06.284822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:06.284880 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:06.321109 1039759 cri.go:89] found id: ""
	I0729 14:42:06.321140 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.321150 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:06.321158 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:06.321229 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:06.357238 1039759 cri.go:89] found id: ""
	I0729 14:42:06.357269 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.357278 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:06.357284 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:06.357342 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:06.391613 1039759 cri.go:89] found id: ""
	I0729 14:42:06.391643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.391653 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:06.391661 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:06.391726 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:06.428782 1039759 cri.go:89] found id: ""
	I0729 14:42:06.428813 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.428823 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:06.428831 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:06.428890 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:06.463558 1039759 cri.go:89] found id: ""
	I0729 14:42:06.463596 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.463607 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:06.463615 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:06.463683 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:06.500442 1039759 cri.go:89] found id: ""
	I0729 14:42:06.500474 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.500484 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:06.500501 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:06.500579 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:06.535589 1039759 cri.go:89] found id: ""
	I0729 14:42:06.535627 1039759 logs.go:276] 0 containers: []
	W0729 14:42:06.535638 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:06.535650 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:06.535668 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:06.584641 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:06.584676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:06.597702 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:06.597737 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:06.664499 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:06.664537 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:06.664555 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:06.744808 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:06.744845 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:06.793853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.294853 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.172993 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:10.174863 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:08.707874 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:11.209387 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:09.286151 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:09.307822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:09.307892 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:09.369334 1039759 cri.go:89] found id: ""
	I0729 14:42:09.369363 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.369373 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:09.369381 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:09.369458 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:09.402302 1039759 cri.go:89] found id: ""
	I0729 14:42:09.402334 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.402345 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:09.402353 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:09.402423 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:09.436351 1039759 cri.go:89] found id: ""
	I0729 14:42:09.436380 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.436402 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:09.436429 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:09.436501 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:09.467735 1039759 cri.go:89] found id: ""
	I0729 14:42:09.467768 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.467780 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:09.467788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:09.467849 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:09.503328 1039759 cri.go:89] found id: ""
	I0729 14:42:09.503355 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.503367 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:09.503376 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:09.503438 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:09.540012 1039759 cri.go:89] found id: ""
	I0729 14:42:09.540039 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.540047 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:09.540053 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:09.540106 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:09.576737 1039759 cri.go:89] found id: ""
	I0729 14:42:09.576801 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.576814 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:09.576822 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:09.576920 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:09.614624 1039759 cri.go:89] found id: ""
	I0729 14:42:09.614651 1039759 logs.go:276] 0 containers: []
	W0729 14:42:09.614659 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:09.614669 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:09.614684 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:09.650533 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:09.650580 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:09.709144 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:09.709175 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:09.724147 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:09.724173 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:09.790737 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:09.790760 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:09.790775 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.376968 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:12.390344 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:12.390409 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:12.424820 1039759 cri.go:89] found id: ""
	I0729 14:42:12.424849 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.424860 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:12.424876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:12.424943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:12.457444 1039759 cri.go:89] found id: ""
	I0729 14:42:12.457480 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.457492 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:12.457500 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:12.457561 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:12.490027 1039759 cri.go:89] found id: ""
	I0729 14:42:12.490058 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.490069 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:12.490077 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:12.490145 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:12.523229 1039759 cri.go:89] found id: ""
	I0729 14:42:12.523256 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.523265 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:12.523270 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:12.523321 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:12.557849 1039759 cri.go:89] found id: ""
	I0729 14:42:12.557875 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.557885 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:12.557891 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:12.557951 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:12.592943 1039759 cri.go:89] found id: ""
	I0729 14:42:12.592973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.592982 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:12.592989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:12.593059 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:12.626495 1039759 cri.go:89] found id: ""
	I0729 14:42:12.626531 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.626539 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:12.626557 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:12.626641 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:12.663764 1039759 cri.go:89] found id: ""
	I0729 14:42:12.663793 1039759 logs.go:276] 0 containers: []
	W0729 14:42:12.663805 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:12.663818 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:12.663835 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:12.722521 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:12.722556 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:12.736476 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:12.736505 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:12.809582 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:12.809617 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:12.809637 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:12.890665 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:12.890712 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:11.793144 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.793447 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.794630 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:12.673257 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.173702 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:13.707929 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.707964 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:15.429702 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:15.443258 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:15.443340 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:15.477170 1039759 cri.go:89] found id: ""
	I0729 14:42:15.477198 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.477207 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:15.477212 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:15.477266 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:15.511614 1039759 cri.go:89] found id: ""
	I0729 14:42:15.511652 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.511665 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:15.511671 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:15.511739 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:15.548472 1039759 cri.go:89] found id: ""
	I0729 14:42:15.548501 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.548511 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:15.548519 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:15.548590 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:15.589060 1039759 cri.go:89] found id: ""
	I0729 14:42:15.589090 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.589102 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:15.589110 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:15.589185 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:15.622846 1039759 cri.go:89] found id: ""
	I0729 14:42:15.622873 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.622882 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:15.622887 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:15.622943 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:15.656193 1039759 cri.go:89] found id: ""
	I0729 14:42:15.656220 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.656229 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:15.656237 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:15.656307 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:15.691301 1039759 cri.go:89] found id: ""
	I0729 14:42:15.691336 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.691348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:15.691357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:15.691420 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:15.729923 1039759 cri.go:89] found id: ""
	I0729 14:42:15.729963 1039759 logs.go:276] 0 containers: []
	W0729 14:42:15.729974 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:15.729988 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:15.730004 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:15.783531 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:15.783569 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:15.799590 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:15.799619 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:15.874849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:15.874886 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:15.874901 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:15.957384 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:15.957424 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.497035 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:18.511538 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:18.511616 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:18.550512 1039759 cri.go:89] found id: ""
	I0729 14:42:18.550552 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.550573 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:18.550582 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:18.550642 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:18.585910 1039759 cri.go:89] found id: ""
	I0729 14:42:18.585942 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.585954 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:18.585962 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:18.586031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:18.619680 1039759 cri.go:89] found id: ""
	I0729 14:42:18.619712 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.619722 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:18.619730 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:18.619799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:18.651559 1039759 cri.go:89] found id: ""
	I0729 14:42:18.651592 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.651604 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:18.651613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:18.651688 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:18.686668 1039759 cri.go:89] found id: ""
	I0729 14:42:18.686693 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.686701 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:18.686711 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:18.686764 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:18.722832 1039759 cri.go:89] found id: ""
	I0729 14:42:18.722859 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.722869 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:18.722876 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:18.722927 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:18.758261 1039759 cri.go:89] found id: ""
	I0729 14:42:18.758289 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.758302 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:18.758310 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:18.758378 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:18.795190 1039759 cri.go:89] found id: ""
	I0729 14:42:18.795216 1039759 logs.go:276] 0 containers: []
	W0729 14:42:18.795227 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:18.795237 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:18.795251 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:18.835331 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:18.835366 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:17.796916 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.294082 1039263 pod_ready.go:102] pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:17.673000 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:19.674010 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.209178 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:20.707421 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:18.889707 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:18.889745 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:18.902477 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:18.902503 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:18.970712 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:18.970735 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:18.970748 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:21.552092 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:21.566581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.566669 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.600230 1039759 cri.go:89] found id: ""
	I0729 14:42:21.600261 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.600275 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:21.600283 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.600346 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.636576 1039759 cri.go:89] found id: ""
	I0729 14:42:21.636616 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.636627 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:21.636635 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.636705 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.672944 1039759 cri.go:89] found id: ""
	I0729 14:42:21.672973 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.672984 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:21.672997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.673063 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.708555 1039759 cri.go:89] found id: ""
	I0729 14:42:21.708582 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.708601 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:21.708613 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:21.708673 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:21.744862 1039759 cri.go:89] found id: ""
	I0729 14:42:21.744891 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.744902 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:21.744908 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:21.744973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:21.779084 1039759 cri.go:89] found id: ""
	I0729 14:42:21.779111 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.779119 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:21.779126 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:21.779183 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:21.819931 1039759 cri.go:89] found id: ""
	I0729 14:42:21.819972 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.819981 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:21.819989 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:21.820047 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:21.855472 1039759 cri.go:89] found id: ""
	I0729 14:42:21.855500 1039759 logs.go:276] 0 containers: []
	W0729 14:42:21.855509 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:21.855522 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:21.855539 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:21.925561 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:21.925579 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:21.925596 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.015986 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:22.016032 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:22.059898 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:22.059935 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:22.129018 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.129055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:21.787886 1039263 pod_ready.go:81] duration metric: took 4m0.000465481s for pod "metrics-server-569cc877fc-5msnp" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:21.787929 1039263 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 14:42:21.787945 1039263 pod_ready.go:38] duration metric: took 4m5.237036546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:21.787973 1039263 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:42:21.788025 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:21.788089 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:21.857594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:21.857613 1039263 cri.go:89] found id: ""
	I0729 14:42:21.857620 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:21.857674 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.862462 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:21.862523 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:21.903562 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:21.903594 1039263 cri.go:89] found id: ""
	I0729 14:42:21.903604 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:21.903660 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.908232 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:21.908327 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:21.947632 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:21.947663 1039263 cri.go:89] found id: ""
	I0729 14:42:21.947674 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:21.947737 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:21.952576 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:21.952649 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:21.995318 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:21.995343 1039263 cri.go:89] found id: ""
	I0729 14:42:21.995351 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:21.995418 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.000352 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:22.000440 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:22.040544 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.040572 1039263 cri.go:89] found id: ""
	I0729 14:42:22.040582 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:22.040648 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.044840 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:22.044910 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:22.090787 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:22.090816 1039263 cri.go:89] found id: ""
	I0729 14:42:22.090827 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:22.090897 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.096748 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:22.096826 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:22.143491 1039263 cri.go:89] found id: ""
	I0729 14:42:22.143522 1039263 logs.go:276] 0 containers: []
	W0729 14:42:22.143534 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:22.143541 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:22.143609 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:22.179378 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:22.179404 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:22.179409 1039263 cri.go:89] found id: ""
	I0729 14:42:22.179419 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:22.179482 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.184686 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:22.189009 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:22.189029 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:22.250475 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:22.250510 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.286581 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:22.286622 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:22.325541 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:22.325570 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:22.831822 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:22.831875 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:22.846540 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:22.846588 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:22.970758 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:22.970796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:23.013428 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:23.013467 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:23.064784 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:23.064820 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:23.111615 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:23.111653 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:23.151296 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:23.151328 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:23.198650 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:23.198692 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:23.259196 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:23.259247 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.808980 1039263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:25.829180 1039263 api_server.go:72] duration metric: took 4m16.997740137s to wait for apiserver process to appear ...
	I0729 14:42:25.829211 1039263 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:42:25.829260 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:25.829335 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:25.875138 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:25.875167 1039263 cri.go:89] found id: ""
	I0729 14:42:25.875175 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:25.875230 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.879855 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:25.879937 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:25.916938 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:25.916964 1039263 cri.go:89] found id: ""
	I0729 14:42:25.916974 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:25.917036 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.921166 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:25.921224 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:25.958196 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:25.958224 1039263 cri.go:89] found id: ""
	I0729 14:42:25.958234 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:25.958300 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:25.962697 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:25.962760 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:26.000162 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:26.000195 1039263 cri.go:89] found id: ""
	I0729 14:42:26.000206 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:26.000277 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.004518 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:26.004594 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:26.041099 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:26.041133 1039263 cri.go:89] found id: ""
	I0729 14:42:26.041144 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:26.041208 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.045334 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:26.045412 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:26.082783 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:26.082815 1039263 cri.go:89] found id: ""
	I0729 14:42:26.082826 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:26.082901 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.086996 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:26.087063 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:26.123636 1039263 cri.go:89] found id: ""
	I0729 14:42:26.123677 1039263 logs.go:276] 0 containers: []
	W0729 14:42:26.123688 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:26.123694 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:26.123756 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:26.163819 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.163849 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.163855 1039263 cri.go:89] found id: ""
	I0729 14:42:26.163864 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:26.163929 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.168611 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:26.173125 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:26.173155 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:22.173593 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:22.708101 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:25.206661 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:27.207926 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:24.645474 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:24.658107 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:24.658171 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:24.696604 1039759 cri.go:89] found id: ""
	I0729 14:42:24.696635 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.696645 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:24.696653 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:24.696725 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:24.733862 1039759 cri.go:89] found id: ""
	I0729 14:42:24.733887 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.733894 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:24.733901 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:24.733957 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:24.770614 1039759 cri.go:89] found id: ""
	I0729 14:42:24.770644 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.770656 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:24.770664 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:24.770734 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:24.806368 1039759 cri.go:89] found id: ""
	I0729 14:42:24.806394 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.806403 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:24.806408 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:24.806470 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:24.838490 1039759 cri.go:89] found id: ""
	I0729 14:42:24.838526 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.838534 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:24.838541 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:24.838596 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:24.871017 1039759 cri.go:89] found id: ""
	I0729 14:42:24.871043 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.871051 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:24.871057 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:24.871128 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:24.903281 1039759 cri.go:89] found id: ""
	I0729 14:42:24.903311 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.903322 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:24.903330 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:24.903403 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:24.937245 1039759 cri.go:89] found id: ""
	I0729 14:42:24.937279 1039759 logs.go:276] 0 containers: []
	W0729 14:42:24.937291 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:24.937304 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:24.937319 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:24.989518 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:24.989551 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:25.005021 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:25.005055 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:25.080849 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:25.080877 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:25.080893 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:25.163742 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:25.163784 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:27.706182 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:27.719350 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:27.719425 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:27.756955 1039759 cri.go:89] found id: ""
	I0729 14:42:27.756982 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.756990 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:27.756997 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:27.757054 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:27.791975 1039759 cri.go:89] found id: ""
	I0729 14:42:27.792014 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.792025 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:27.792033 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:27.792095 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:27.834188 1039759 cri.go:89] found id: ""
	I0729 14:42:27.834215 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.834223 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:27.834230 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:27.834296 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:27.867798 1039759 cri.go:89] found id: ""
	I0729 14:42:27.867834 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.867843 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:27.867851 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:27.867918 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:27.900316 1039759 cri.go:89] found id: ""
	I0729 14:42:27.900343 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.900351 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:27.900357 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:27.900422 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:27.932361 1039759 cri.go:89] found id: ""
	I0729 14:42:27.932391 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.932402 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:27.932425 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:27.932493 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:27.965530 1039759 cri.go:89] found id: ""
	I0729 14:42:27.965562 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.965573 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:27.965581 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:27.965651 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:27.999582 1039759 cri.go:89] found id: ""
	I0729 14:42:27.999608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:27.999617 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:27.999626 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:27.999654 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:28.069415 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:28.069438 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:28.069454 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:28.149781 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:28.149821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:28.190045 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:28.190072 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:28.244147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:28.244188 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.217755 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:26.217796 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:26.257363 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:26.257399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:26.297502 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:26.297534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:26.729336 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:26.729370 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:26.779172 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:26.779213 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:26.794369 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:26.794399 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:26.857964 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:26.858000 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.895052 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:26.895083 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:26.936360 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:26.936395 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:27.037118 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:27.037160 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:27.089764 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:27.089798 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:27.134009 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:27.134042 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.690960 1039263 api_server.go:253] Checking apiserver healthz at https://192.168.50.53:8443/healthz ...
	I0729 14:42:29.696457 1039263 api_server.go:279] https://192.168.50.53:8443/healthz returned 200:
	ok
	I0729 14:42:29.697313 1039263 api_server.go:141] control plane version: v1.30.3
	I0729 14:42:29.697335 1039263 api_server.go:131] duration metric: took 3.868117139s to wait for apiserver health ...
	I0729 14:42:29.697343 1039263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:42:29.697370 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:29.697430 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:29.740594 1039263 cri.go:89] found id: "0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:29.740623 1039263 cri.go:89] found id: ""
	I0729 14:42:29.740633 1039263 logs.go:276] 1 containers: [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8]
	I0729 14:42:29.740696 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.745183 1039263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:29.745257 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:29.780091 1039263 cri.go:89] found id: "759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:29.780112 1039263 cri.go:89] found id: ""
	I0729 14:42:29.780119 1039263 logs.go:276] 1 containers: [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1]
	I0729 14:42:29.780178 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.784241 1039263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:29.784305 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:29.825618 1039263 cri.go:89] found id: "cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:29.825641 1039263 cri.go:89] found id: ""
	I0729 14:42:29.825649 1039263 logs.go:276] 1 containers: [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d]
	I0729 14:42:29.825715 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.830291 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:29.830351 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:29.866651 1039263 cri.go:89] found id: "ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:29.866685 1039263 cri.go:89] found id: ""
	I0729 14:42:29.866695 1039263 logs.go:276] 1 containers: [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40]
	I0729 14:42:29.866758 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.871440 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:29.871494 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:29.911944 1039263 cri.go:89] found id: "1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:29.911968 1039263 cri.go:89] found id: ""
	I0729 14:42:29.911976 1039263 logs.go:276] 1 containers: [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b]
	I0729 14:42:29.912037 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.916604 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:29.916680 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:29.954334 1039263 cri.go:89] found id: "d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:29.954361 1039263 cri.go:89] found id: ""
	I0729 14:42:29.954371 1039263 logs.go:276] 1 containers: [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322]
	I0729 14:42:29.954446 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:29.959051 1039263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:29.959130 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:29.996760 1039263 cri.go:89] found id: ""
	I0729 14:42:29.996795 1039263 logs.go:276] 0 containers: []
	W0729 14:42:29.996804 1039263 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:29.996812 1039263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 14:42:29.996883 1039263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 14:42:30.034562 1039263 cri.go:89] found id: "bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.034598 1039263 cri.go:89] found id: "40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.034604 1039263 cri.go:89] found id: ""
	I0729 14:42:30.034614 1039263 logs.go:276] 2 containers: [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4]
	I0729 14:42:30.034682 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.039588 1039263 ssh_runner.go:195] Run: which crictl
	I0729 14:42:30.043866 1039263 logs.go:123] Gathering logs for kube-apiserver [0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8] ...
	I0729 14:42:30.043889 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e342f5e4bb066164663fdda23af9832a9aa66696acce9b9c1b90a02566539a8"
	I0729 14:42:30.091309 1039263 logs.go:123] Gathering logs for etcd [759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1] ...
	I0729 14:42:30.091349 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759428588e36e6da754a3085381760e4a44cdac8503782124c51391babe584f1"
	I0729 14:42:30.149888 1039263 logs.go:123] Gathering logs for kube-scheduler [ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40] ...
	I0729 14:42:30.149926 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed34fb84b9098b28ae79a80b72cd1a1fc808d958df51fddce2ea7af9bd5d8d40"
	I0729 14:42:30.189441 1039263 logs.go:123] Gathering logs for kube-controller-manager [d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322] ...
	I0729 14:42:30.189479 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2573d61839fba531aaa230607bb4beee4b1951607ce0e89234becf19cd40322"
	I0729 14:42:30.250850 1039263 logs.go:123] Gathering logs for storage-provisioner [bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a] ...
	I0729 14:42:30.250890 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9e633119b916d021b1f6454aa58ad68b62ddeb681317973ea2a558f929a23a"
	I0729 14:42:30.290077 1039263 logs.go:123] Gathering logs for storage-provisioner [40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4] ...
	I0729 14:42:30.290111 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40292615dffc710488b8093d726af7e74ef4d8176e0ab5deab41a789c2e538d4"
	I0729 14:42:30.329035 1039263 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:30.329068 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:30.383068 1039263 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:30.383113 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 14:42:30.497009 1039263 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:30.497045 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:30.914489 1039263 logs.go:123] Gathering logs for container status ...
	I0729 14:42:30.914534 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:30.972901 1039263 logs.go:123] Gathering logs for kube-proxy [1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b] ...
	I0729 14:42:30.972951 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a12022d9b8d8ad53ac9938351dd5a8c349f0caab4bd5a5eb3c31345fcac0b0b"
	I0729 14:42:31.021798 1039263 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.021838 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:31.040147 1039263 logs.go:123] Gathering logs for coredns [cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d] ...
	I0729 14:42:31.040182 1039263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce96789d197cea05c059a680502d7dad130d83c20a246ab7e3a95c2cbe3940d"
	I0729 14:42:26.674294 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.173375 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:31.173588 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:29.710051 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:32.209382 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.593681 1039263 system_pods.go:59] 8 kube-system pods found
	I0729 14:42:33.593711 1039263 system_pods.go:61] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.593716 1039263 system_pods.go:61] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.593719 1039263 system_pods.go:61] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.593723 1039263 system_pods.go:61] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.593725 1039263 system_pods.go:61] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.593728 1039263 system_pods.go:61] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.593733 1039263 system_pods.go:61] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.593736 1039263 system_pods.go:61] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.593744 1039263 system_pods.go:74] duration metric: took 3.896394577s to wait for pod list to return data ...
	I0729 14:42:33.593751 1039263 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:42:33.596176 1039263 default_sa.go:45] found service account: "default"
	I0729 14:42:33.596197 1039263 default_sa.go:55] duration metric: took 2.440561ms for default service account to be created ...
	I0729 14:42:33.596205 1039263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:42:33.601830 1039263 system_pods.go:86] 8 kube-system pods found
	I0729 14:42:33.601855 1039263 system_pods.go:89] "coredns-7db6d8ff4d-6dhzz" [c680e565-fe93-4072-8fe8-6fd440ae5675] Running
	I0729 14:42:33.601861 1039263 system_pods.go:89] "etcd-embed-certs-668123" [3244d6a8-3aa2-406a-86fe-9770f5b8541a] Running
	I0729 14:42:33.601866 1039263 system_pods.go:89] "kube-apiserver-embed-certs-668123" [a00570e4-b496-4083-b280-4125643e475e] Running
	I0729 14:42:33.601871 1039263 system_pods.go:89] "kube-controller-manager-embed-certs-668123" [cec685e1-4d5f-4210-a115-e3766c962f07] Running
	I0729 14:42:33.601878 1039263 system_pods.go:89] "kube-proxy-2v79q" [e43e850d-b94e-467c-bf0f-0eac3828f54f] Running
	I0729 14:42:33.601887 1039263 system_pods.go:89] "kube-scheduler-embed-certs-668123" [4037d948-faed-49c9-b321-6a4be51b9ea9] Running
	I0729 14:42:33.601897 1039263 system_pods.go:89] "metrics-server-569cc877fc-5msnp" [eb9cd6f7-caf5-4b18-b0d6-0f01add839ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:42:33.601908 1039263 system_pods.go:89] "storage-provisioner" [ecdab0df-406c-4f3c-b8fe-34a48b7f1e0a] Running
	I0729 14:42:33.601921 1039263 system_pods.go:126] duration metric: took 5.70985ms to wait for k8s-apps to be running ...
	I0729 14:42:33.601934 1039263 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:42:33.601994 1039263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:33.620869 1039263 system_svc.go:56] duration metric: took 18.921974ms WaitForService to wait for kubelet
	I0729 14:42:33.620907 1039263 kubeadm.go:582] duration metric: took 4m24.7894747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:42:33.620939 1039263 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:42:33.623517 1039263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:42:33.623538 1039263 node_conditions.go:123] node cpu capacity is 2
	I0729 14:42:33.623562 1039263 node_conditions.go:105] duration metric: took 2.617272ms to run NodePressure ...
	I0729 14:42:33.623582 1039263 start.go:241] waiting for startup goroutines ...
	I0729 14:42:33.623591 1039263 start.go:246] waiting for cluster config update ...
	I0729 14:42:33.623601 1039263 start.go:255] writing updated cluster config ...
	I0729 14:42:33.623897 1039263 ssh_runner.go:195] Run: rm -f paused
	I0729 14:42:33.677961 1039263 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:42:33.679952 1039263 out.go:177] * Done! kubectl is now configured to use "embed-certs-668123" cluster and "default" namespace by default
	I0729 14:42:30.758335 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:30.771788 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:30.771860 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:30.807608 1039759 cri.go:89] found id: ""
	I0729 14:42:30.807633 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.807641 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:30.807647 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:30.807709 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:30.842361 1039759 cri.go:89] found id: ""
	I0729 14:42:30.842389 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.842397 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:30.842404 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:30.842474 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:30.879123 1039759 cri.go:89] found id: ""
	I0729 14:42:30.879149 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.879157 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:30.879162 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:30.879228 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:30.913042 1039759 cri.go:89] found id: ""
	I0729 14:42:30.913072 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.913084 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:30.913092 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:30.913162 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:30.949867 1039759 cri.go:89] found id: ""
	I0729 14:42:30.949900 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.949910 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:30.949919 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:30.949988 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:30.997468 1039759 cri.go:89] found id: ""
	I0729 14:42:30.997497 1039759 logs.go:276] 0 containers: []
	W0729 14:42:30.997509 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:30.997516 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:30.997606 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:31.039611 1039759 cri.go:89] found id: ""
	I0729 14:42:31.039643 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.039654 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:31.039662 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:31.039730 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:31.085802 1039759 cri.go:89] found id: ""
	I0729 14:42:31.085839 1039759 logs.go:276] 0 containers: []
	W0729 14:42:31.085851 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:31.085862 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:31.085890 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:31.155919 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:31.155941 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:31.155954 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:31.232795 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:31.232833 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:31.270647 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:31.270682 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:31.324648 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:31.324685 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:33.839801 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:33.853358 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:33.853417 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:33.674345 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:36.174468 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:34.707752 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:37.209918 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:33.889294 1039759 cri.go:89] found id: ""
	I0729 14:42:33.889323 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.889334 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:33.889342 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:33.889413 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:33.930106 1039759 cri.go:89] found id: ""
	I0729 14:42:33.930130 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.930142 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:33.930149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:33.930211 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:33.973607 1039759 cri.go:89] found id: ""
	I0729 14:42:33.973634 1039759 logs.go:276] 0 containers: []
	W0729 14:42:33.973646 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:33.973654 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:33.973715 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:34.010103 1039759 cri.go:89] found id: ""
	I0729 14:42:34.010133 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.010142 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:34.010149 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:34.010209 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:34.044050 1039759 cri.go:89] found id: ""
	I0729 14:42:34.044080 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.044092 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:34.044099 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:34.044174 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:34.081222 1039759 cri.go:89] found id: ""
	I0729 14:42:34.081250 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.081260 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:34.081268 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:34.081360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:34.115837 1039759 cri.go:89] found id: ""
	I0729 14:42:34.115878 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.115891 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:34.115899 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:34.115973 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:34.151086 1039759 cri.go:89] found id: ""
	I0729 14:42:34.151116 1039759 logs.go:276] 0 containers: []
	W0729 14:42:34.151126 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:34.151139 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:34.151156 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:34.164058 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:34.164087 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:34.238481 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:34.238503 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:34.238518 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:34.316236 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:34.316279 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:34.356281 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:34.356316 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
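	The cri.go/logs.go cycle above is minikube probing CRI-O for each expected control-plane container and, finding none, falling back to journalctl, dmesg and crictl output for diagnostics. As a rough, assumption-labelled sketch of that probe (not minikube's actual cri/logs packages; it only assumes crictl is on the PATH), the same check can be written as:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainers asks the CRI runtime, via crictl, for the IDs of containers whose
	// name matches the given component; an empty result is what produces the
	// "No container was found matching ..." warnings seen in the log above.
	func findContainers(component string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
		for _, c := range components {
			if ids := findContainers(c); len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
			} else {
				fmt.Printf("%s: %v\n", c, ids)
			}
		}
	}

	An all-empty result, as here, is consistent with the apiserver connection refusals that follow: no control-plane containers were ever started.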
	I0729 14:42:36.910374 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:36.924907 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:36.925008 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:36.960508 1039759 cri.go:89] found id: ""
	I0729 14:42:36.960535 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.960543 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:36.960550 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:36.960631 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:36.999840 1039759 cri.go:89] found id: ""
	I0729 14:42:36.999869 1039759 logs.go:276] 0 containers: []
	W0729 14:42:36.999881 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:36.999889 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:36.999960 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:37.032801 1039759 cri.go:89] found id: ""
	I0729 14:42:37.032832 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.032840 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:37.032847 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:37.032907 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:37.066359 1039759 cri.go:89] found id: ""
	I0729 14:42:37.066386 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.066394 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:37.066401 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:37.066454 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:37.103816 1039759 cri.go:89] found id: ""
	I0729 14:42:37.103844 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.103852 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:37.103859 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:37.103922 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:37.137135 1039759 cri.go:89] found id: ""
	I0729 14:42:37.137175 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.137186 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:37.137194 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:37.137267 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:37.170819 1039759 cri.go:89] found id: ""
	I0729 14:42:37.170851 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.170863 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:37.170871 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:37.170941 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:37.206427 1039759 cri.go:89] found id: ""
	I0729 14:42:37.206456 1039759 logs.go:276] 0 containers: []
	W0729 14:42:37.206467 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:37.206478 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:37.206492 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:37.287119 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:37.287160 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:37.331090 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:37.331119 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:37.392147 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:37.392189 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:37.406017 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:37.406047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:37.471644 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:38.673603 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:40.674214 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:39.706915 1039440 pod_ready.go:102] pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:41.201453 1039440 pod_ready.go:81] duration metric: took 4m0.000454399s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" ...
	E0729 14:42:41.201488 1039440 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-gmz64" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:42:41.201514 1039440 pod_ready.go:38] duration metric: took 4m13.052610312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:42:41.201553 1039440 kubeadm.go:597] duration metric: took 4m22.712976139s to restartPrimaryControlPlane
	W0729 14:42:41.201639 1039440 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:41.201696 1039440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
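	The pod_ready.go lines above poll each system-critical pod's Ready condition and give up after 4m0s, which is what triggers the "will reset cluster" path that follows. A hedged client-go sketch of that kind of wait (illustrative only, not minikube's pod_ready implementation; the kubeconfig path and pod name are taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-569cc877fc-gmz64", 4*time.Minute); err != nil {
			fmt.Println("pod never became Ready:", err)
		}
	}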
	I0729 14:42:39.972835 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:39.985878 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:42:39.985945 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:42:40.020312 1039759 cri.go:89] found id: ""
	I0729 14:42:40.020349 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.020360 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:42:40.020368 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:42:40.020456 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:42:40.055688 1039759 cri.go:89] found id: ""
	I0729 14:42:40.055721 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.055732 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:42:40.055740 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:42:40.055799 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:42:40.090432 1039759 cri.go:89] found id: ""
	I0729 14:42:40.090463 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.090472 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:42:40.090478 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:42:40.090549 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:42:40.127794 1039759 cri.go:89] found id: ""
	I0729 14:42:40.127823 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.127832 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:42:40.127838 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:42:40.127894 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:42:40.162911 1039759 cri.go:89] found id: ""
	I0729 14:42:40.162944 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.162953 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:42:40.162959 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:42:40.163020 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:42:40.201578 1039759 cri.go:89] found id: ""
	I0729 14:42:40.201608 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.201619 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:42:40.201625 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:42:40.201684 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:42:40.247314 1039759 cri.go:89] found id: ""
	I0729 14:42:40.247340 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.247348 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:42:40.247363 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:42:40.247436 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:42:40.285393 1039759 cri.go:89] found id: ""
	I0729 14:42:40.285422 1039759 logs.go:276] 0 containers: []
	W0729 14:42:40.285431 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:42:40.285440 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:42:40.285458 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:42:40.299901 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:42:40.299933 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:42:40.372774 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:42:40.372802 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:42:40.372821 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:42:40.454392 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:42:40.454447 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 14:42:40.494641 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:42:40.494671 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:42:43.046060 1039759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:42:43.058790 1039759 kubeadm.go:597] duration metric: took 4m3.37086398s to restartPrimaryControlPlane
	W0729 14:42:43.058888 1039759 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:42:43.058920 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:42:43.544647 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:42:43.560304 1039759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:42:43.570229 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:42:43.579922 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:42:43.579946 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:42:43.580004 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:42:43.589520 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:42:43.589591 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:42:43.600286 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:42:43.611565 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:42:43.611629 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:42:43.623432 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.633289 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:42:43.633338 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:42:43.643410 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:42:43.653723 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:42:43.653816 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:42:43.663840 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:42:43.735243 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:42:43.735314 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:42:43.904148 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:42:43.904310 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:42:43.904480 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:42:44.101401 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:42:44.103392 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:42:44.103499 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:42:44.103580 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:42:44.103693 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:42:44.103829 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:42:44.103944 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:42:44.104054 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:42:44.104146 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:42:44.104360 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:42:44.104599 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:42:44.105264 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:42:44.105363 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:42:44.105461 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:42:44.426107 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:42:44.593004 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:42:44.845387 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:42:44.934634 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:42:44.959808 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:42:44.961918 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:42:44.961990 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:42:45.117986 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:42:42.678218 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.175453 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:45.119775 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:42:45.119913 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:42:45.121333 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:42:45.123001 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:42:45.123783 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:42:45.126031 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:42:47.673678 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:49.674212 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:52.173086 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:54.173797 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:56.178948 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:42:58.674432 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:00.675207 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:03.173621 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:05.175460 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:07.674421 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:09.674478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:12.882329 1039440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.680602745s)
	I0729 14:43:12.882426 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:12.900267 1039440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:12.910750 1039440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:12.921172 1039440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:12.921194 1039440 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:12.921244 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 14:43:12.931186 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:12.931243 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:12.940800 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 14:43:12.949875 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:12.949929 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:12.959555 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.968817 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:12.968871 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:12.978560 1039440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 14:43:12.987657 1039440 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:12.987700 1039440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:12.997142 1039440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:13.057245 1039440 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 14:43:13.057405 1039440 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:13.205227 1039440 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:13.205381 1039440 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:13.205541 1039440 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:43:13.404885 1039440 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:13.407054 1039440 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:13.407148 1039440 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:13.407232 1039440 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:13.407329 1039440 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:13.407411 1039440 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:13.407509 1039440 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:13.407598 1039440 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:13.407688 1039440 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:13.407774 1039440 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:13.407889 1039440 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:13.408006 1039440 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:13.408071 1039440 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:13.408177 1039440 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:13.563569 1039440 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:14.001138 1039440 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:14.091368 1039440 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:14.238732 1039440 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:14.344460 1039440 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:14.346386 1039440 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:14.349309 1039440 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:12.174022 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.673166 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:14.351183 1039440 out.go:204]   - Booting up control plane ...
	I0729 14:43:14.351293 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:14.351374 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:14.351671 1039440 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:14.375878 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:14.377114 1039440 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:14.377198 1039440 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:14.528561 1039440 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:14.528665 1039440 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:15.030447 1039440 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044001ms
	I0729 14:43:15.030591 1039440 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:43:20.033357 1039440 kubeadm.go:310] [api-check] The API server is healthy after 5.002708747s
	I0729 14:43:20.055871 1039440 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:43:20.069020 1039440 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:43:20.108465 1039440 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:43:20.108664 1039440 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-751306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:43:20.124596 1039440 kubeadm.go:310] [bootstrap-token] Using token: vqqt7g.hayxn6bly3sjo08s
	I0729 14:43:20.125995 1039440 out.go:204]   - Configuring RBAC rules ...
	I0729 14:43:20.126124 1039440 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:43:20.138826 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:43:20.145976 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:43:20.149166 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:43:20.152875 1039440 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:43:20.156268 1039440 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:43:20.446117 1039440 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:43:20.900251 1039440 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:43:21.446105 1039440 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:43:21.446920 1039440 kubeadm.go:310] 
	I0729 14:43:21.446984 1039440 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:43:21.446992 1039440 kubeadm.go:310] 
	I0729 14:43:21.447057 1039440 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:43:21.447063 1039440 kubeadm.go:310] 
	I0729 14:43:21.447084 1039440 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:43:21.447133 1039440 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:43:21.447176 1039440 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:43:21.447182 1039440 kubeadm.go:310] 
	I0729 14:43:21.447233 1039440 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:43:21.447242 1039440 kubeadm.go:310] 
	I0729 14:43:21.447310 1039440 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:43:21.447334 1039440 kubeadm.go:310] 
	I0729 14:43:21.447408 1039440 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:43:21.447515 1039440 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:43:21.447574 1039440 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:43:21.447582 1039440 kubeadm.go:310] 
	I0729 14:43:21.447652 1039440 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:43:21.447722 1039440 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:43:21.447728 1039440 kubeadm.go:310] 
	I0729 14:43:21.447799 1039440 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.447903 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:43:21.447931 1039440 kubeadm.go:310] 	--control-plane 
	I0729 14:43:21.447935 1039440 kubeadm.go:310] 
	I0729 14:43:21.448017 1039440 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:43:21.448025 1039440 kubeadm.go:310] 
	I0729 14:43:21.448115 1039440 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token vqqt7g.hayxn6bly3sjo08s \
	I0729 14:43:21.448239 1039440 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:43:21.449071 1039440 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
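	The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A small Go sketch of that derivation (the ca.crt path is the one minikube uses on this VM and is illustrative):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo, matching kubeadm's pinning format.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}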
	I0729 14:43:21.449117 1039440 cni.go:84] Creating CNI manager for ""
	I0729 14:43:21.449134 1039440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:43:21.450744 1039440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:43:16.674887 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:19.175478 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:21.452012 1039440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:43:21.464232 1039440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
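	The two lines above create /etc/cni/net.d and push a small bridge CNI config (1-k8s.conflist); the log does not show the 496-byte payload itself. Purely as an assumption-labelled illustration, a bridge + portmap conflist of that general shape can be emitted like this (subnet and names are placeholders, not the file minikube actually wrote):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Illustrative bridge CNI chain: a bridge plugin with host-local IPAM, plus portmap.
		conf := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":      "bridge",
					"bridge":    "bridge",
					"isGateway": true,
					"ipMasq":    true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // placeholder pod CIDR
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		b, _ := json.MarshalIndent(conf, "", "  ")
		fmt.Println(string(b))
	}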
	I0729 14:43:21.486786 1039440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:43:21.486890 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.486887 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-751306 minikube.k8s.io/updated_at=2024_07_29T14_43_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=default-k8s-diff-port-751306 minikube.k8s.io/primary=true
	I0729 14:43:21.689413 1039440 ops.go:34] apiserver oom_adj: -16
	I0729 14:43:21.697342 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:22.198351 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:21.673361 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:23.674189 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:26.173782 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:22.698043 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.198259 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:23.697640 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.198325 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:24.697702 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.198216 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.697625 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.197978 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:26.698039 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:27.197794 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:25.126835 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:43:25.127033 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:25.127306 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
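	The [kubelet-check] failure above is kubeadm polling the kubelet's local healthz endpoint and getting connection refused. A minimal Go sketch of that probe, assuming the default kubelet healthz port 10248:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// kubeletHealthy mirrors the check in the log: GET http://localhost:10248/healthz,
	// treating a refused connection or a non-200 response as "not healthy yet".
	func kubeletHealthy() bool {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		for i := 0; i < 8; i++ {
			if kubeletHealthy() {
				fmt.Println("kubelet healthy")
				return
			}
			fmt.Println("kubelet not healthy yet; retrying")
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for kubelet /healthz")
	}

	Here the probe keeps failing for the v1.20.0 cluster, so its kubeadm init never progresses past [wait-control-plane], while the v1.30.3 cluster below reports a healthy kubelet within about half a second.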
	I0729 14:43:28.174036 1038758 pod_ready.go:102] pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace has status "Ready":"False"
	I0729 14:43:29.667306 1038758 pod_ready.go:81] duration metric: took 4m0.000473541s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" ...
	E0729 14:43:29.667341 1038758 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-59sbc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 14:43:29.667369 1038758 pod_ready.go:38] duration metric: took 4m13.916299366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:29.667407 1038758 kubeadm.go:597] duration metric: took 4m21.57875039s to restartPrimaryControlPlane
	W0729 14:43:29.667481 1038758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 14:43:29.667513 1038758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:43:27.698036 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.197941 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:28.697839 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.197525 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:29.698141 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.197670 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.697615 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.197999 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:31.697648 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:32.197647 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:30.127504 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:30.127777 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:32.697837 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.197692 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:33.697431 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.198048 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.698439 1039440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:43:34.802320 1039440 kubeadm.go:1113] duration metric: took 13.31552277s to wait for elevateKubeSystemPrivileges
	I0729 14:43:34.802367 1039440 kubeadm.go:394] duration metric: took 5m16.369033556s to StartCluster
	I0729 14:43:34.802391 1039440 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.802488 1039440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:43:34.804740 1039440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:43:34.805049 1039440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.233 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:43:34.805148 1039440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:43:34.805251 1039440 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805262 1039440 config.go:182] Loaded profile config "default-k8s-diff-port-751306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:43:34.805269 1039440 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805313 1039440 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751306"
	I0729 14:43:34.805294 1039440 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805341 1039440 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:43:34.805358 1039440 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.805369 1039440 addons.go:243] addon metrics-server should already be in state true
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805325 1039440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751306"
	I0729 14:43:34.805396 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.805838 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805869 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805904 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.805928 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.805968 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.806026 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.806625 1039440 out.go:177] * Verifying Kubernetes components...
	I0729 14:43:34.807999 1039440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:43:34.823091 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0729 14:43:34.823103 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0729 14:43:34.823532 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.823556 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.824084 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824111 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824372 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.824399 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.824427 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.824891 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.825049 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0729 14:43:34.825140 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.825191 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.825210 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.825415 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.825927 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.825945 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.826314 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.826903 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.826939 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.829361 1039440 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751306"
	W0729 14:43:34.829386 1039440 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:43:34.829417 1039440 host.go:66] Checking if "default-k8s-diff-port-751306" exists ...
	I0729 14:43:34.829785 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.829832 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.841752 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I0729 14:43:34.842232 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.842938 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.842965 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.843370 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0729 14:43:34.843397 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.843713 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.843818 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.844223 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.844247 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.844615 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.844805 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.846424 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.846619 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.848531 1039440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:43:34.848918 1039440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:43:34.849006 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0729 14:43:34.849421 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.849852 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:43:34.849870 1039440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:43:34.849888 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850037 1039440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:34.850053 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:43:34.850069 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.850233 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.850251 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.850659 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.851665 1039440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:43:34.851781 1039440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:43:34.853937 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854441 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854518 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.854540 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.854589 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.854779 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855035 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.855098 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.855114 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.855169 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.855465 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.855658 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.855828 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.856191 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:34.869648 1039440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0729 14:43:34.870131 1039440 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:43:34.870600 1039440 main.go:141] libmachine: Using API Version  1
	I0729 14:43:34.870618 1039440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:43:34.871134 1039440 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:43:34.871334 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetState
	I0729 14:43:34.873088 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .DriverName
	I0729 14:43:34.873340 1039440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:34.873353 1039440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:43:34.873369 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHHostname
	I0729 14:43:34.876289 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876751 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:b9:23", ip: ""} in network mk-default-k8s-diff-port-751306: {Iface:virbr4 ExpiryTime:2024-07-29 15:38:04 +0000 UTC Type:0 Mac:52:54:00:9f:b9:23 Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:default-k8s-diff-port-751306 Clientid:01:52:54:00:9f:b9:23}
	I0729 14:43:34.876765 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | domain default-k8s-diff-port-751306 has defined IP address 192.168.72.233 and MAC address 52:54:00:9f:b9:23 in network mk-default-k8s-diff-port-751306
	I0729 14:43:34.876952 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHPort
	I0729 14:43:34.877132 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHKeyPath
	I0729 14:43:34.877267 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .GetSSHUsername
	I0729 14:43:34.877375 1039440 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/default-k8s-diff-port-751306/id_rsa Username:docker}
	I0729 14:43:35.022897 1039440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:43:35.044537 1039440 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057697 1039440 node_ready.go:49] node "default-k8s-diff-port-751306" has status "Ready":"True"
	I0729 14:43:35.057729 1039440 node_ready.go:38] duration metric: took 13.149458ms for node "default-k8s-diff-port-751306" to be "Ready" ...
	I0729 14:43:35.057744 1039440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:35.073050 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:35.150661 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:43:35.170721 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:43:35.228871 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:43:35.228903 1039440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:43:35.276845 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:43:35.276880 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:43:35.335623 1039440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.335656 1039440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:43:35.407804 1039440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:43:35.446540 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446567 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.446927 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.446959 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.446972 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.446985 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.446991 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.447286 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.447307 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.454199 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.454216 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.454476 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.454495 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.824592 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.824615 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.825058 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.825441 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.825525 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:35.825567 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:35.825576 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:35.827444 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:35.827454 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:35.827465 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331175 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331202 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331575 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331597 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.331607 1039440 main.go:141] libmachine: Making call to close driver server
	I0729 14:43:36.331616 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) Calling .Close
	I0729 14:43:36.331623 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331923 1039440 main.go:141] libmachine: (default-k8s-diff-port-751306) DBG | Closing plugin on server side
	I0729 14:43:36.331961 1039440 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:43:36.331986 1039440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:43:36.332003 1039440 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751306"
	I0729 14:43:36.333995 1039440 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 14:43:36.335441 1039440 addons.go:510] duration metric: took 1.53029708s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 14:43:37.081992 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.082019 1039440 pod_ready.go:81] duration metric: took 2.008931409s for pod "coredns-7db6d8ff4d-7qhqh" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.082031 1039440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086173 1039440 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.086194 1039440 pod_ready.go:81] duration metric: took 4.154163ms for pod "coredns-7db6d8ff4d-zxmwx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.086203 1039440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090617 1039440 pod_ready.go:92] pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.090636 1039440 pod_ready.go:81] duration metric: took 4.42625ms for pod "etcd-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.090647 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094929 1039440 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.094950 1039440 pod_ready.go:81] duration metric: took 4.296245ms for pod "kube-apiserver-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.094962 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099462 1039440 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.099483 1039440 pod_ready.go:81] duration metric: took 4.513354ms for pod "kube-controller-manager-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.099495 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478252 1039440 pod_ready.go:92] pod "kube-proxy-tqtjx" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.478281 1039440 pod_ready.go:81] duration metric: took 378.778206ms for pod "kube-proxy-tqtjx" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.478295 1039440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878655 1039440 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace has status "Ready":"True"
	I0729 14:43:37.878678 1039440 pod_ready.go:81] duration metric: took 400.374407ms for pod "kube-scheduler-default-k8s-diff-port-751306" in "kube-system" namespace to be "Ready" ...
	I0729 14:43:37.878686 1039440 pod_ready.go:38] duration metric: took 2.820929833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:43:37.878702 1039440 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:43:37.878752 1039440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:43:37.894699 1039440 api_server.go:72] duration metric: took 3.08960429s to wait for apiserver process to appear ...
	I0729 14:43:37.894730 1039440 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:43:37.894767 1039440 api_server.go:253] Checking apiserver healthz at https://192.168.72.233:8444/healthz ...
	I0729 14:43:37.899710 1039440 api_server.go:279] https://192.168.72.233:8444/healthz returned 200:
	ok
	I0729 14:43:37.900733 1039440 api_server.go:141] control plane version: v1.30.3
	I0729 14:43:37.900757 1039440 api_server.go:131] duration metric: took 6.019707ms to wait for apiserver health ...
	I0729 14:43:37.900765 1039440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:43:38.083157 1039440 system_pods.go:59] 9 kube-system pods found
	I0729 14:43:38.083197 1039440 system_pods.go:61] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.083204 1039440 system_pods.go:61] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.083210 1039440 system_pods.go:61] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.083215 1039440 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.083221 1039440 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.083226 1039440 system_pods.go:61] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.083231 1039440 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.083240 1039440 system_pods.go:61] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.083246 1039440 system_pods.go:61] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.083255 1039440 system_pods.go:74] duration metric: took 182.484884ms to wait for pod list to return data ...
	I0729 14:43:38.083269 1039440 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:43:38.277387 1039440 default_sa.go:45] found service account: "default"
	I0729 14:43:38.277418 1039440 default_sa.go:55] duration metric: took 194.142035ms for default service account to be created ...
	I0729 14:43:38.277429 1039440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:43:38.481158 1039440 system_pods.go:86] 9 kube-system pods found
	I0729 14:43:38.481194 1039440 system_pods.go:89] "coredns-7db6d8ff4d-7qhqh" [88941d43-c67d-4190-896c-edfc4c96b9a8] Running
	I0729 14:43:38.481202 1039440 system_pods.go:89] "coredns-7db6d8ff4d-zxmwx" [13b78c9b-97dc-4313-92d1-76fab481b276] Running
	I0729 14:43:38.481210 1039440 system_pods.go:89] "etcd-default-k8s-diff-port-751306" [11d5216e-a3e3-4ac8-9b00-1b1b04bb1c3e] Running
	I0729 14:43:38.481217 1039440 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751306" [f9f539b1-374e-4214-b4ac-d6bcb60ca022] Running
	I0729 14:43:38.481225 1039440 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751306" [07af9a19-2d14-4727-b7b0-ad2f297c1d1a] Running
	I0729 14:43:38.481230 1039440 system_pods.go:89] "kube-proxy-tqtjx" [bd100e13-d714-4ddb-ba43-44be43035b3f] Running
	I0729 14:43:38.481236 1039440 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751306" [03603694-d75d-4073-8ce9-0ed9bbbe150a] Running
	I0729 14:43:38.481248 1039440 system_pods.go:89] "metrics-server-569cc877fc-z9wg5" [f022dfec-8e97-4679-a7dd-739c9231af82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:43:38.481255 1039440 system_pods.go:89] "storage-provisioner" [a8bf282a-27e8-43f9-a2ac-af6000a4decc] Running
	I0729 14:43:38.481267 1039440 system_pods.go:126] duration metric: took 203.830126ms to wait for k8s-apps to be running ...
	I0729 14:43:38.481280 1039440 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:43:38.481329 1039440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:38.496175 1039440 system_svc.go:56] duration metric: took 14.88714ms WaitForService to wait for kubelet
	I0729 14:43:38.496209 1039440 kubeadm.go:582] duration metric: took 3.691120463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 14:43:38.496237 1039440 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:43:38.677820 1039440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:43:38.677847 1039440 node_conditions.go:123] node cpu capacity is 2
	I0729 14:43:38.677859 1039440 node_conditions.go:105] duration metric: took 181.616437ms to run NodePressure ...
	I0729 14:43:38.677874 1039440 start.go:241] waiting for startup goroutines ...
	I0729 14:43:38.677882 1039440 start.go:246] waiting for cluster config update ...
	I0729 14:43:38.677894 1039440 start.go:255] writing updated cluster config ...
	I0729 14:43:38.678166 1039440 ssh_runner.go:195] Run: rm -f paused
	I0729 14:43:38.728616 1039440 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 14:43:38.730494 1039440 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751306" cluster and "default" namespace by default
	I0729 14:43:40.128244 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:43:40.128447 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:43:55.945251 1038758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.277690166s)
	I0729 14:43:55.945335 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:43:55.960870 1038758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 14:43:55.971175 1038758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:43:55.981424 1038758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:43:55.981456 1038758 kubeadm.go:157] found existing configuration files:
	
	I0729 14:43:55.981512 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:43:55.992098 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:43:55.992165 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:43:56.002242 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:43:56.011416 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:43:56.011486 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:43:56.020848 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.030219 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:43:56.030280 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:43:56.039957 1038758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:43:56.049607 1038758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:43:56.049670 1038758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 14:43:56.059413 1038758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:43:56.109453 1038758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 14:43:56.109563 1038758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:43:56.230876 1038758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:43:56.231018 1038758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:43:56.231126 1038758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 14:43:56.244355 1038758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:43:56.246461 1038758 out.go:204]   - Generating certificates and keys ...
	I0729 14:43:56.246573 1038758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:43:56.246666 1038758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:43:56.246755 1038758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:43:56.246843 1038758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:43:56.246964 1038758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:43:56.247169 1038758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:43:56.247267 1038758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:43:56.247365 1038758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:43:56.247473 1038758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:43:56.247588 1038758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:43:56.247646 1038758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:43:56.247718 1038758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:43:56.593641 1038758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:43:56.714510 1038758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 14:43:56.862780 1038758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:43:57.010367 1038758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:43:57.108324 1038758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:43:57.109028 1038758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:43:57.111425 1038758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:43:57.113088 1038758 out.go:204]   - Booting up control plane ...
	I0729 14:43:57.113217 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:43:57.113336 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:43:57.113501 1038758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:43:57.135168 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:43:57.141915 1038758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:43:57.142022 1038758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:43:57.269947 1038758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 14:43:57.270056 1038758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 14:43:57.772110 1038758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.03343ms
	I0729 14:43:57.772229 1038758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 14:44:02.773898 1038758 kubeadm.go:310] [api-check] The API server is healthy after 5.00168383s
	I0729 14:44:02.788629 1038758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 14:44:02.805813 1038758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 14:44:02.831687 1038758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 14:44:02.831963 1038758 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-603534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 14:44:02.842427 1038758 kubeadm.go:310] [bootstrap-token] Using token: hg3j3v.551bb9ju0g9ic9e6
	I0729 14:44:00.129004 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:00.129267 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:02.844018 1038758 out.go:204]   - Configuring RBAC rules ...
	I0729 14:44:02.844160 1038758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 14:44:02.851693 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 14:44:02.859496 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 14:44:02.863556 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 14:44:02.866896 1038758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 14:44:02.871375 1038758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 14:44:03.181687 1038758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 14:44:03.618445 1038758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 14:44:04.184562 1038758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 14:44:04.185548 1038758 kubeadm.go:310] 
	I0729 14:44:04.185655 1038758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 14:44:04.185689 1038758 kubeadm.go:310] 
	I0729 14:44:04.185788 1038758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 14:44:04.185801 1038758 kubeadm.go:310] 
	I0729 14:44:04.185825 1038758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 14:44:04.185906 1038758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 14:44:04.185983 1038758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 14:44:04.185992 1038758 kubeadm.go:310] 
	I0729 14:44:04.186079 1038758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 14:44:04.186090 1038758 kubeadm.go:310] 
	I0729 14:44:04.186155 1038758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 14:44:04.186165 1038758 kubeadm.go:310] 
	I0729 14:44:04.186231 1038758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 14:44:04.186337 1038758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 14:44:04.186431 1038758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 14:44:04.186441 1038758 kubeadm.go:310] 
	I0729 14:44:04.186575 1038758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 14:44:04.186679 1038758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 14:44:04.186689 1038758 kubeadm.go:310] 
	I0729 14:44:04.186810 1038758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.186944 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 \
	I0729 14:44:04.186974 1038758 kubeadm.go:310] 	--control-plane 
	I0729 14:44:04.186984 1038758 kubeadm.go:310] 
	I0729 14:44:04.187102 1038758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 14:44:04.187111 1038758 kubeadm.go:310] 
	I0729 14:44:04.187224 1038758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hg3j3v.551bb9ju0g9ic9e6 \
	I0729 14:44:04.187375 1038758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eeafd943d4359c61c99f068b67c5c2fc0405054ca81f6f4eb33277fb51322477 
	I0729 14:44:04.188377 1038758 kubeadm.go:310] W0729 14:43:56.090027    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188711 1038758 kubeadm.go:310] W0729 14:43:56.090887    2906 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 14:44:04.188834 1038758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:04.188852 1038758 cni.go:84] Creating CNI manager for ""
	I0729 14:44:04.188863 1038758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 14:44:04.190535 1038758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 14:44:04.191948 1038758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 14:44:04.203414 1038758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 14:44:04.223025 1038758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 14:44:04.223114 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.223132 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603534 minikube.k8s.io/updated_at=2024_07_29T14_44_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=75dd5ed86f7940da98b5fdb592bb8258c6e30411 minikube.k8s.io/name=no-preload-603534 minikube.k8s.io/primary=true
	I0729 14:44:04.240353 1038758 ops.go:34] apiserver oom_adj: -16
	I0729 14:44:04.442077 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:04.942458 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.442843 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:05.942138 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.442232 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:06.942611 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.442939 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:07.942661 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.443044 1038758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 14:44:08.522590 1038758 kubeadm.go:1113] duration metric: took 4.299548803s to wait for elevateKubeSystemPrivileges
	I0729 14:44:08.522633 1038758 kubeadm.go:394] duration metric: took 5m0.491164642s to StartCluster
	I0729 14:44:08.522657 1038758 settings.go:142] acquiring lock: {Name:mke61e73d7bb1a5bd9c2f4c9e9bba0a07b199ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.522755 1038758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:44:08.524573 1038758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-974764/kubeconfig: {Name:mk3101cfd1aa9ed7ba350fc15cc31c47309fcefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 14:44:08.524893 1038758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 14:44:08.524999 1038758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 14:44:08.525112 1038758 addons.go:69] Setting storage-provisioner=true in profile "no-preload-603534"
	I0729 14:44:08.525150 1038758 addons.go:234] Setting addon storage-provisioner=true in "no-preload-603534"
	I0729 14:44:08.525146 1038758 addons.go:69] Setting default-storageclass=true in profile "no-preload-603534"
	I0729 14:44:08.525155 1038758 config.go:182] Loaded profile config "no-preload-603534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 14:44:08.525167 1038758 addons.go:69] Setting metrics-server=true in profile "no-preload-603534"
	I0729 14:44:08.525182 1038758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603534"
	W0729 14:44:08.525162 1038758 addons.go:243] addon storage-provisioner should already be in state true
	I0729 14:44:08.525229 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525185 1038758 addons.go:234] Setting addon metrics-server=true in "no-preload-603534"
	W0729 14:44:08.525264 1038758 addons.go:243] addon metrics-server should already be in state true
	I0729 14:44:08.525294 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.525510 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525553 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525652 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525668 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.525688 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.525715 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.526581 1038758 out.go:177] * Verifying Kubernetes components...
	I0729 14:44:08.527919 1038758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 14:44:08.541874 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0729 14:44:08.542126 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0729 14:44:08.542251 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0729 14:44:08.542397 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542505 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542664 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.542948 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.542969 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543075 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543090 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543115 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.543127 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.543323 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543546 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543551 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.543758 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.543779 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544014 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.544035 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.544149 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.548026 1038758 addons.go:234] Setting addon default-storageclass=true in "no-preload-603534"
	W0729 14:44:08.548048 1038758 addons.go:243] addon default-storageclass should already be in state true
	I0729 14:44:08.548079 1038758 host.go:66] Checking if "no-preload-603534" exists ...
	I0729 14:44:08.548457 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.548478 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.559699 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0729 14:44:08.560297 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.560916 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.560953 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.561332 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.561519 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.563422 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.564073 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0729 14:44:08.564524 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.565011 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.565038 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.565427 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.565592 1038758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 14:44:08.565752 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.566901 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 14:44:08.566921 1038758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 14:44:08.566941 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.567688 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.568067 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I0729 14:44:08.568443 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.569019 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.569040 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.569462 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.569583 1038758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 14:44:08.570038 1038758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 14:44:08.570074 1038758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 14:44:08.571187 1038758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.571204 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 14:44:08.571223 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.571595 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572203 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.572247 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.572506 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.572704 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.572893 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.573100 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.574551 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.574900 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.574919 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.575074 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.575286 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.575427 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.575551 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.585902 1038758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0729 14:44:08.586319 1038758 main.go:141] libmachine: () Calling .GetVersion
	I0729 14:44:08.586778 1038758 main.go:141] libmachine: Using API Version  1
	I0729 14:44:08.586803 1038758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 14:44:08.587135 1038758 main.go:141] libmachine: () Calling .GetMachineName
	I0729 14:44:08.587357 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetState
	I0729 14:44:08.588606 1038758 main.go:141] libmachine: (no-preload-603534) Calling .DriverName
	I0729 14:44:08.588827 1038758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.588844 1038758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 14:44:08.588861 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHHostname
	I0729 14:44:08.591169 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591434 1038758 main.go:141] libmachine: (no-preload-603534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:94:45", ip: ""} in network mk-no-preload-603534: {Iface:virbr1 ExpiryTime:2024-07-29 15:38:42 +0000 UTC Type:0 Mac:52:54:00:bf:94:45 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-603534 Clientid:01:52:54:00:bf:94:45}
	I0729 14:44:08.591466 1038758 main.go:141] libmachine: (no-preload-603534) DBG | domain no-preload-603534 has defined IP address 192.168.61.116 and MAC address 52:54:00:bf:94:45 in network mk-no-preload-603534
	I0729 14:44:08.591600 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHPort
	I0729 14:44:08.591766 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHKeyPath
	I0729 14:44:08.591873 1038758 main.go:141] libmachine: (no-preload-603534) Calling .GetSSHUsername
	I0729 14:44:08.592103 1038758 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/no-preload-603534/id_rsa Username:docker}
	I0729 14:44:08.752015 1038758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 14:44:08.775498 1038758 node_ready.go:35] waiting up to 6m0s for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788547 1038758 node_ready.go:49] node "no-preload-603534" has status "Ready":"True"
	I0729 14:44:08.788572 1038758 node_ready.go:38] duration metric: took 13.040411ms for node "no-preload-603534" to be "Ready" ...
	I0729 14:44:08.788582 1038758 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 14:44:08.793475 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:08.861468 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 14:44:08.869542 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 14:44:08.869567 1038758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 14:44:08.898398 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 14:44:08.911120 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 14:44:08.911148 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 14:44:08.931151 1038758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:08.931179 1038758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 14:44:08.976093 1038758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 14:44:09.449857 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449885 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.449863 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.449958 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450343 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450354 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450361 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450373 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450374 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450389 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450442 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450455 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450476 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.450487 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.450620 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450635 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.450637 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.450779 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.450799 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.493934 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.493959 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.494303 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.494320 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.494342 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.706038 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706072 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.706366 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.706382 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.706391 1038758 main.go:141] libmachine: Making call to close driver server
	I0729 14:44:09.706398 1038758 main.go:141] libmachine: (no-preload-603534) Calling .Close
	I0729 14:44:09.707956 1038758 main.go:141] libmachine: (no-preload-603534) DBG | Closing plugin on server side
	I0729 14:44:09.707958 1038758 main.go:141] libmachine: Successfully made call to close driver server
	I0729 14:44:09.707986 1038758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 14:44:09.708015 1038758 addons.go:475] Verifying addon metrics-server=true in "no-preload-603534"
	I0729 14:44:09.709729 1038758 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 14:44:09.711283 1038758 addons.go:510] duration metric: took 1.186289164s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 14:44:10.807976 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:13.300325 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:15.800886 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.300042 1038758 pod_ready.go:102] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"False"
	I0729 14:44:18.800080 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.800111 1038758 pod_ready.go:81] duration metric: took 10.006613711s for pod "coredns-5cfdc65f69-m6q8r" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.800124 1038758 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804949 1038758 pod_ready.go:92] pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.804974 1038758 pod_ready.go:81] duration metric: took 4.840477ms for pod "coredns-5cfdc65f69-vn8z4" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.804985 1038758 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810160 1038758 pod_ready.go:92] pod "etcd-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.810176 1038758 pod_ready.go:81] duration metric: took 5.184516ms for pod "etcd-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.810185 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814785 1038758 pod_ready.go:92] pod "kube-apiserver-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.814807 1038758 pod_ready.go:81] duration metric: took 4.615516ms for pod "kube-apiserver-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.814819 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819023 1038758 pod_ready.go:92] pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:18.819044 1038758 pod_ready.go:81] duration metric: took 4.215656ms for pod "kube-controller-manager-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:18.819056 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198226 1038758 pod_ready.go:92] pod "kube-proxy-7mr4z" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.198252 1038758 pod_ready.go:81] duration metric: took 379.18928ms for pod "kube-proxy-7mr4z" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.198265 1038758 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598783 1038758 pod_ready.go:92] pod "kube-scheduler-no-preload-603534" in "kube-system" namespace has status "Ready":"True"
	I0729 14:44:19.598824 1038758 pod_ready.go:81] duration metric: took 400.55255ms for pod "kube-scheduler-no-preload-603534" in "kube-system" namespace to be "Ready" ...
	I0729 14:44:19.598835 1038758 pod_ready.go:38] duration metric: took 10.810240266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
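The readiness polling above can be approximated with kubectl's own wait verb; a rough equivalent (label selectors copied from the log, the 6m timeout mirrors the harness's per-pod budget):

	kubectl --context no-preload-603534 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	kubectl --context no-preload-603534 -n kube-system wait pod \
	  -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m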
	I0729 14:44:19.598865 1038758 api_server.go:52] waiting for apiserver process to appear ...
	I0729 14:44:19.598931 1038758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 14:44:19.615165 1038758 api_server.go:72] duration metric: took 11.090236578s to wait for apiserver process to appear ...
	I0729 14:44:19.615190 1038758 api_server.go:88] waiting for apiserver healthz status ...
	I0729 14:44:19.615211 1038758 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0729 14:44:19.619574 1038758 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0729 14:44:19.620586 1038758 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 14:44:19.620610 1038758 api_server.go:131] duration metric: took 5.412598ms to wait for apiserver health ...
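The healthz probe above hits the apiserver's secure port directly with minikube's client certificates; an equivalent check through kubectl, which avoids handling the certificates yourself, might look like:

	kubectl --context no-preload-603534 get --raw='/healthz?verbose'
	kubectl --context no-preload-603534 get --raw='/version'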
	I0729 14:44:19.620620 1038758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 14:44:19.802376 1038758 system_pods.go:59] 9 kube-system pods found
	I0729 14:44:19.802408 1038758 system_pods.go:61] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:19.802415 1038758 system_pods.go:61] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:19.802420 1038758 system_pods.go:61] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:19.802429 1038758 system_pods.go:61] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:19.802434 1038758 system_pods.go:61] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:19.802441 1038758 system_pods.go:61] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:19.802446 1038758 system_pods.go:61] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:19.802454 1038758 system_pods.go:61] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:19.802470 1038758 system_pods.go:61] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:19.802482 1038758 system_pods.go:74] duration metric: took 181.853357ms to wait for pod list to return data ...
	I0729 14:44:19.802491 1038758 default_sa.go:34] waiting for default service account to be created ...
	I0729 14:44:19.998312 1038758 default_sa.go:45] found service account: "default"
	I0729 14:44:19.998348 1038758 default_sa.go:55] duration metric: took 195.845187ms for default service account to be created ...
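A manual version of the default-service-account check, if you want to confirm it outside the test harness:

	kubectl --context no-preload-603534 -n default get serviceaccount default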
	I0729 14:44:19.998361 1038758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 14:44:20.201742 1038758 system_pods.go:86] 9 kube-system pods found
	I0729 14:44:20.201778 1038758 system_pods.go:89] "coredns-5cfdc65f69-m6q8r" [b3a0c38d-1587-4fdf-b2e6-58d364ca400b] Running
	I0729 14:44:20.201787 1038758 system_pods.go:89] "coredns-5cfdc65f69-vn8z4" [4654aadf-7870-46b6-96e6-5948239fbe22] Running
	I0729 14:44:20.201793 1038758 system_pods.go:89] "etcd-no-preload-603534" [01737765-56ad-4305-aa98-d531dd1fadb4] Running
	I0729 14:44:20.201800 1038758 system_pods.go:89] "kube-apiserver-no-preload-603534" [141fffbe-df4b-4de1-9d78-f1acf0b837a6] Running
	I0729 14:44:20.201807 1038758 system_pods.go:89] "kube-controller-manager-no-preload-603534" [39c980ec-50f7-4af1-b931-1a446775c934] Running
	I0729 14:44:20.201812 1038758 system_pods.go:89] "kube-proxy-7mr4z" [17de173c-2b95-4b35-a9d7-b38f065270cb] Running
	I0729 14:44:20.201818 1038758 system_pods.go:89] "kube-scheduler-no-preload-603534" [8d896d6c-43b9-4bc8-9994-41b0bd4b636d] Running
	I0729 14:44:20.201826 1038758 system_pods.go:89] "metrics-server-78fcd8795b-852x6" [637fea9b-2924-4593-a4e2-99a33ab613d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 14:44:20.201835 1038758 system_pods.go:89] "storage-provisioner" [7336eb38-d53d-4456-8367-cf843abe5cb5] Running
	I0729 14:44:20.201850 1038758 system_pods.go:126] duration metric: took 203.481528ms to wait for k8s-apps to be running ...
	I0729 14:44:20.201860 1038758 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 14:44:20.201914 1038758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:20.217416 1038758 system_svc.go:56] duration metric: took 15.543768ms WaitForService to wait for kubelet
	I0729 14:44:20.217445 1038758 kubeadm.go:582] duration metric: took 11.692521258s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
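The kubelet-service probe above is just systemctl inside the guest; it can be repeated over minikube ssh. A sketch, assuming the same profile name:

	minikube -p no-preload-603534 ssh -- sudo systemctl is-active kubelet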
	I0729 14:44:20.217464 1038758 node_conditions.go:102] verifying NodePressure condition ...
	I0729 14:44:20.398667 1038758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 14:44:20.398696 1038758 node_conditions.go:123] node cpu capacity is 2
	I0729 14:44:20.398708 1038758 node_conditions.go:105] duration metric: took 181.238886ms to run NodePressure ...
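The NodePressure step reads the node's capacity and condition fields; the same data is visible with plain kubectl, for example:

	kubectl --context no-preload-603534 get nodes \
	  -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
	kubectl --context no-preload-603534 describe nodes | grep -E '(Memory|Disk|PID)Pressure'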
	I0729 14:44:20.398720 1038758 start.go:241] waiting for startup goroutines ...
	I0729 14:44:20.398727 1038758 start.go:246] waiting for cluster config update ...
	I0729 14:44:20.398738 1038758 start.go:255] writing updated cluster config ...
	I0729 14:44:20.399014 1038758 ssh_runner.go:195] Run: rm -f paused
	I0729 14:44:20.452187 1038758 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 14:44:20.454434 1038758 out.go:177] * Done! kubectl is now configured to use "no-preload-603534" cluster and "default" namespace by default
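The "minor skew: 1" note above compares the local kubectl client against the cluster's apiserver version; both sides can be inspected with:

	kubectl --context no-preload-603534 version --output=yaml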
	I0729 14:44:40.130597 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:44:40.130831 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:44:40.130848 1039759 kubeadm.go:310] 
	I0729 14:44:40.130903 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:44:40.130956 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:44:40.130966 1039759 kubeadm.go:310] 
	I0729 14:44:40.131032 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:44:40.131110 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:44:40.131256 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:44:40.131270 1039759 kubeadm.go:310] 
	I0729 14:44:40.131450 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:44:40.131499 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:44:40.131542 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:44:40.131552 1039759 kubeadm.go:310] 
	I0729 14:44:40.131686 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:44:40.131795 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:44:40.131806 1039759 kubeadm.go:310] 
	I0729 14:44:40.131947 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:44:40.132064 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:44:40.132162 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:44:40.132254 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:44:40.132264 1039759 kubeadm.go:310] 
	I0729 14:44:40.133208 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:44:40.133363 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:44:40.133468 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
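The kubeadm error above already names the useful follow-ups; collected into one sequence (run inside the guest, e.g. via minikube ssh, with the CRI-O socket path taken from the log) they look roughly like this:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause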
	W0729 14:44:40.133610 1039759 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 14:44:40.133676 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 14:44:40.607039 1039759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 14:44:40.623771 1039759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 14:44:40.636278 1039759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 14:44:40.636310 1039759 kubeadm.go:157] found existing configuration files:
	
	I0729 14:44:40.636371 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 14:44:40.647768 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 14:44:40.647827 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 14:44:40.658281 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 14:44:40.668393 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 14:44:40.668477 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 14:44:40.678521 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.687891 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 14:44:40.687960 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 14:44:40.698384 1039759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 14:44:40.708965 1039759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 14:44:40.709047 1039759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
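The stale-config cleanup above follows a simple pattern: for each kubeconfig under /etc/kubernetes, keep it only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm can rewrite it. A condensed sketch of that loop, using the same endpoint and paths as the log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/${f}.conf \
	    || sudo rm -f /etc/kubernetes/${f}.conf
	done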
	I0729 14:44:40.719665 1039759 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 14:44:40.796786 1039759 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 14:44:40.796883 1039759 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 14:44:40.946106 1039759 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 14:44:40.946258 1039759 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 14:44:40.946388 1039759 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 14:44:41.140483 1039759 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 14:44:41.142390 1039759 out.go:204]   - Generating certificates and keys ...
	I0729 14:44:41.142503 1039759 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 14:44:41.142610 1039759 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 14:44:41.142722 1039759 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 14:44:41.142811 1039759 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 14:44:41.142910 1039759 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 14:44:41.142995 1039759 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 14:44:41.143092 1039759 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 14:44:41.143180 1039759 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 14:44:41.143279 1039759 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 14:44:41.143390 1039759 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 14:44:41.143445 1039759 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 14:44:41.143524 1039759 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 14:44:41.188854 1039759 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 14:44:41.329957 1039759 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 14:44:41.968599 1039759 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 14:44:42.034788 1039759 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 14:44:42.055543 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 14:44:42.056622 1039759 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 14:44:42.056715 1039759 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 14:44:42.204165 1039759 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 14:44:42.205935 1039759 out.go:204]   - Booting up control plane ...
	I0729 14:44:42.206076 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 14:44:42.216259 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 14:44:42.217947 1039759 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 14:44:42.219361 1039759 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 14:44:42.221672 1039759 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 14:45:22.223830 1039759 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 14:45:22.223940 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:22.224139 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:27.224303 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:27.224574 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:37.224905 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:37.225090 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:45:57.226285 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:45:57.226533 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227279 1039759 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 14:46:37.227485 1039759 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 14:46:37.227494 1039759 kubeadm.go:310] 
	I0729 14:46:37.227528 1039759 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 14:46:37.227605 1039759 kubeadm.go:310] 		timed out waiting for the condition
	I0729 14:46:37.227627 1039759 kubeadm.go:310] 
	I0729 14:46:37.227683 1039759 kubeadm.go:310] 	This error is likely caused by:
	I0729 14:46:37.227732 1039759 kubeadm.go:310] 		- The kubelet is not running
	I0729 14:46:37.227861 1039759 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 14:46:37.227870 1039759 kubeadm.go:310] 
	I0729 14:46:37.228011 1039759 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 14:46:37.228093 1039759 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 14:46:37.228140 1039759 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 14:46:37.228173 1039759 kubeadm.go:310] 
	I0729 14:46:37.228310 1039759 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 14:46:37.228443 1039759 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 14:46:37.228454 1039759 kubeadm.go:310] 
	I0729 14:46:37.228606 1039759 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 14:46:37.228714 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 14:46:37.228821 1039759 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 14:46:37.228913 1039759 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 14:46:37.228934 1039759 kubeadm.go:310] 
	I0729 14:46:37.229926 1039759 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 14:46:37.230070 1039759 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 14:46:37.230175 1039759 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 14:46:37.230284 1039759 kubeadm.go:394] duration metric: took 7m57.608522587s to StartCluster
	I0729 14:46:37.230347 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 14:46:37.230435 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 14:46:37.276238 1039759 cri.go:89] found id: ""
	I0729 14:46:37.276294 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.276304 1039759 logs.go:278] No container was found matching "kube-apiserver"
	I0729 14:46:37.276317 1039759 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 14:46:37.276439 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 14:46:37.309934 1039759 cri.go:89] found id: ""
	I0729 14:46:37.309960 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.309969 1039759 logs.go:278] No container was found matching "etcd"
	I0729 14:46:37.309975 1039759 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 14:46:37.310031 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 14:46:37.343286 1039759 cri.go:89] found id: ""
	I0729 14:46:37.343312 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.343320 1039759 logs.go:278] No container was found matching "coredns"
	I0729 14:46:37.343325 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 14:46:37.343384 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 14:46:37.378735 1039759 cri.go:89] found id: ""
	I0729 14:46:37.378763 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.378773 1039759 logs.go:278] No container was found matching "kube-scheduler"
	I0729 14:46:37.378779 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 14:46:37.378834 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 14:46:37.414244 1039759 cri.go:89] found id: ""
	I0729 14:46:37.414275 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.414284 1039759 logs.go:278] No container was found matching "kube-proxy"
	I0729 14:46:37.414290 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 14:46:37.414372 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 14:46:37.453809 1039759 cri.go:89] found id: ""
	I0729 14:46:37.453842 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.453858 1039759 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 14:46:37.453866 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 14:46:37.453955 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 14:46:37.492250 1039759 cri.go:89] found id: ""
	I0729 14:46:37.492279 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.492288 1039759 logs.go:278] No container was found matching "kindnet"
	I0729 14:46:37.492294 1039759 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 14:46:37.492360 1039759 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 14:46:37.554342 1039759 cri.go:89] found id: ""
	I0729 14:46:37.554377 1039759 logs.go:276] 0 containers: []
	W0729 14:46:37.554388 1039759 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 14:46:37.554404 1039759 logs.go:123] Gathering logs for kubelet ...
	I0729 14:46:37.554422 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 14:46:37.631118 1039759 logs.go:123] Gathering logs for dmesg ...
	I0729 14:46:37.631165 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 14:46:37.650991 1039759 logs.go:123] Gathering logs for describe nodes ...
	I0729 14:46:37.651047 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 14:46:37.731852 1039759 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 14:46:37.731880 1039759 logs.go:123] Gathering logs for CRI-O ...
	I0729 14:46:37.731897 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 14:46:37.849049 1039759 logs.go:123] Gathering logs for container status ...
	I0729 14:46:37.849092 1039759 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
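The log-gathering pass above (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) is the same material that `minikube logs` collects; to capture it for a bug report, as the suggestion box further down recommends, something like the following works, with the profile name as it appears later in this log:

	minikube -p old-k8s-version-360866 logs --file=logs.txt
	minikube -p old-k8s-version-360866 ssh -- 'sudo journalctl -u kubelet -n 400'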
	W0729 14:46:37.893957 1039759 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 14:46:37.894031 1039759 out.go:239] * 
	W0729 14:46:37.894120 1039759 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.894150 1039759 out.go:239] * 
	W0729 14:46:37.895278 1039759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 14:46:37.898735 1039759 out.go:177] 
	W0729 14:46:37.900049 1039759 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 14:46:37.900115 1039759 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 14:46:37.900146 1039759 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
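A hedged sketch of the suggested retry, combining the hint above with the driver, runtime, and Kubernetes version implied elsewhere in this report; treat the exact flag set as an assumption rather than the test's actual invocation:

	minikube start -p old-k8s-version-360866 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd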
	I0729 14:46:37.901531 1039759 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 14:58:32 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:32.974038552Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265112974002762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1acdf1e6-b1b2-4ba8-bdec-2ab53e429d2b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:32 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:32.974739836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c20710b3-19ac-4685-a22b-e3ef623c614f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:32 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:32.974800100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c20710b3-19ac-4685-a22b-e3ef623c614f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:32 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:32.974835670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c20710b3-19ac-4685-a22b-e3ef623c614f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.010117370Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e135116b-566d-4320-abca-852ca5750300 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.010215327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e135116b-566d-4320-abca-852ca5750300 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.011571816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2052b28c-e65e-4f57-9e7f-9b1ef5739f6d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.012025135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265113012002880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2052b28c-e65e-4f57-9e7f-9b1ef5739f6d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.012604494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43040567-9459-4e77-ad4e-0b4da8965fa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.012708572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43040567-9459-4e77-ad4e-0b4da8965fa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.012745679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=43040567-9459-4e77-ad4e-0b4da8965fa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.049681737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1732ad5-4225-45e8-8df0-b059d7ee7566 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.049810994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1732ad5-4225-45e8-8df0-b059d7ee7566 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.051054464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f111cfe-a1b6-423c-9f8a-ead64dc8a727 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.051423861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265113051401171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f111cfe-a1b6-423c-9f8a-ead64dc8a727 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.052102958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fa3a17f-d2d4-43f6-8f2e-efbc52a82888 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.052163830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fa3a17f-d2d4-43f6-8f2e-efbc52a82888 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.052200019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1fa3a17f-d2d4-43f6-8f2e-efbc52a82888 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.085247163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b761f815-a2f2-416d-a20e-18936bcf6bc7 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.085341505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b761f815-a2f2-416d-a20e-18936bcf6bc7 name=/runtime.v1.RuntimeService/Version
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.086388024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f9be2b2-e2a2-478a-8d1a-d595fdcebe61 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.086811455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722265113086788312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f9be2b2-e2a2-478a-8d1a-d595fdcebe61 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.087320121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5c187d5-0d34-4045-b809-2d7d7116ef91 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.087377624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5c187d5-0d34-4045-b809-2d7d7116ef91 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 14:58:33 old-k8s-version-360866 crio[647]: time="2024-07-29 14:58:33.087408775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c5c187d5-0d34-4045-b809-2d7d7116ef91 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 14:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057215] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048160] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.116415] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.599440] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.593896] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.539359] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.065084] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070151] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.197280] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.136393] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.263438] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.439493] systemd-fstab-generator[833]: Ignoring "noauto" option for root device
	[  +0.060871] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.313828] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +12.194292] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 14:42] systemd-fstab-generator[4994]: Ignoring "noauto" option for root device
	[Jul29 14:44] systemd-fstab-generator[5279]: Ignoring "noauto" option for root device
	[  +0.064843] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:58:33 up 20 min,  0 users,  load average: 0.16, 0.07, 0.04
	Linux old-k8s-version-360866 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005d5ef0)
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c6def0, 0x4f0ac20, 0xc000c01950, 0x1, 0xc0001020c0)
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00025ac40, 0xc0001020c0)
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c4b7d0, 0xc0002e1ea0)
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 29 14:58:30 old-k8s-version-360866 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 14:58:30 old-k8s-version-360866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 14:58:30 old-k8s-version-360866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 144.
	Jul 29 14:58:30 old-k8s-version-360866 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 14:58:30 old-k8s-version-360866 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6824]: I0729 14:58:30.833882    6824 server.go:416] Version: v1.20.0
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6824]: I0729 14:58:30.834195    6824 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6824]: I0729 14:58:30.836080    6824 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6824]: W0729 14:58:30.837292    6824 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 14:58:30 old-k8s-version-360866 kubelet[6824]: I0729 14:58:30.837353    6824 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 2 (232.751399ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-360866" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (169.83s)
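The failure above reduces to one symptom: the kubelet on old-k8s-version-360866 is crash-looping (systemd reports restart counter 144) and nothing answers on localhost:8443, so every kubectl-based assertion is skipped. A minimal way to confirm that state by hand is sketched below; it is not part of the test suite, it assumes SSH access to the still-running guest via the profile name taken from the log, and it assumes the guest image ships the usual systemd and CRI tooling (systemctl, journalctl, ss, crictl).

	# open a shell on the affected node (profile name copied from the log above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-360866

	# inside the guest: is the kubelet restarting, and is anything listening on 8443?
	sudo systemctl status kubelet --no-pager | head -n 20
	sudo journalctl -u kubelet --no-pager | tail -n 50
	sudo ss -tlnp | grep 8443 || echo "apiserver not listening"

	# list every container the runtime knows about (the report above shows an empty list)
	sudo crictl ps -a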

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 4
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 4.17
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.56
31 TestOffline 81.87
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 136.92
40 TestAddons/serial/GCPAuth/Namespaces 1.7
42 TestAddons/parallel/Registry 16.29
44 TestAddons/parallel/InspektorGadget 11.81
46 TestAddons/parallel/HelmTiller 9.82
48 TestAddons/parallel/CSI 58.69
49 TestAddons/parallel/Headlamp 18.89
50 TestAddons/parallel/CloudSpanner 6.6
51 TestAddons/parallel/LocalPath 51.3
52 TestAddons/parallel/NvidiaDevicePlugin 6.48
53 TestAddons/parallel/Yakd 10.72
55 TestCertOptions 45.22
56 TestCertExpiration 263.67
58 TestForceSystemdFlag 57.69
59 TestForceSystemdEnv 45.81
61 TestKVMDriverInstallOrUpdate 1.32
65 TestErrorSpam/setup 40.86
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.51
69 TestErrorSpam/unpause 1.56
70 TestErrorSpam/stop 4.55
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 95.67
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.22
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.13
82 TestFunctional/serial/CacheCmd/cache/add_local 1.01
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.1
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 32.71
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.51
93 TestFunctional/serial/LogsFileCmd 1.49
94 TestFunctional/serial/InvalidService 4.51
96 TestFunctional/parallel/ConfigCmd 0.35
97 TestFunctional/parallel/DashboardCmd 7.78
98 TestFunctional/parallel/DryRun 0.28
99 TestFunctional/parallel/InternationalLanguage 0.15
100 TestFunctional/parallel/StatusCmd 0.79
104 TestFunctional/parallel/ServiceCmdConnect 8.6
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 30.49
108 TestFunctional/parallel/SSHCmd 0.44
109 TestFunctional/parallel/CpCmd 1.27
110 TestFunctional/parallel/MySQL 20.33
111 TestFunctional/parallel/FileSync 0.24
112 TestFunctional/parallel/CertSync 1.23
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
120 TestFunctional/parallel/License 0.18
121 TestFunctional/parallel/ServiceCmd/DeployApp 22.2
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
132 TestFunctional/parallel/MountCmd/any-port 9.52
133 TestFunctional/parallel/ProfileCmd/profile_list 0.27
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.77
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.41
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.48
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.55
142 TestFunctional/parallel/ImageCommands/Setup 0.38
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.21
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.01
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.06
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.71
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.95
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.69
149 TestFunctional/parallel/MountCmd/specific-port 1.62
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
155 TestFunctional/parallel/ServiceCmd/List 0.94
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.29
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
158 TestFunctional/parallel/ServiceCmd/Format 0.36
159 TestFunctional/parallel/ServiceCmd/URL 0.38
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 202.8
167 TestMultiControlPlane/serial/DeployApp 5.22
168 TestMultiControlPlane/serial/PingHostFromPods 1.18
169 TestMultiControlPlane/serial/AddWorkerNode 53.25
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
172 TestMultiControlPlane/serial/CopyFile 12.81
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.2
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
181 TestMultiControlPlane/serial/RestartCluster 382.62
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
183 TestMultiControlPlane/serial/AddSecondaryNode 77.24
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
188 TestJSONOutput/start/Command 58.55
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.73
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.64
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.36
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 85.23
220 TestMountStart/serial/StartWithMountFirst 26.36
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 26.42
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.54
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 21.27
228 TestMountStart/serial/VerifyMountPostStop 0.37
231 TestMultiNode/serial/FreshStart2Nodes 116.69
232 TestMultiNode/serial/DeployApp2Nodes 3.64
233 TestMultiNode/serial/PingHostFrom2Pods 0.76
234 TestMultiNode/serial/AddNode 49.11
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 7.16
238 TestMultiNode/serial/StopNode 2.11
239 TestMultiNode/serial/StartAfterStop 36.74
241 TestMultiNode/serial/DeleteNode 2.08
243 TestMultiNode/serial/RestartMultiNode 181.12
244 TestMultiNode/serial/ValidateNameConflict 44.72
251 TestScheduledStopUnix 109.27
255 TestRunningBinaryUpgrade 174.16
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 93.03
269 TestNetworkPlugins/group/false 2.97
273 TestStoppedBinaryUpgrade/Setup 0.6
274 TestStoppedBinaryUpgrade/Upgrade 146.09
275 TestNoKubernetes/serial/StartWithStopK8s 58.99
276 TestNoKubernetes/serial/Start 41.27
277 TestStoppedBinaryUpgrade/MinikubeLogs 1
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
279 TestNoKubernetes/serial/ProfileList 1.48
288 TestPause/serial/Start 97.84
289 TestNoKubernetes/serial/Stop 1.28
290 TestNoKubernetes/serial/StartNoArgs 44.94
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
293 TestNetworkPlugins/group/auto/Start 79.74
294 TestNetworkPlugins/group/kindnet/Start 107.15
295 TestNetworkPlugins/group/auto/KubeletFlags 0.24
296 TestNetworkPlugins/group/auto/NetCatPod 12.24
297 TestNetworkPlugins/group/auto/DNS 0.18
298 TestNetworkPlugins/group/auto/Localhost 0.17
299 TestNetworkPlugins/group/auto/HairPin 0.16
300 TestNetworkPlugins/group/calico/Start 83.34
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
304 TestNetworkPlugins/group/custom-flannel/Start 101.58
305 TestNetworkPlugins/group/kindnet/DNS 0.18
306 TestNetworkPlugins/group/kindnet/Localhost 0.15
307 TestNetworkPlugins/group/kindnet/HairPin 0.14
308 TestNetworkPlugins/group/enable-default-cni/Start 84.06
309 TestNetworkPlugins/group/flannel/Start 124.57
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.23
312 TestNetworkPlugins/group/calico/NetCatPod 12.29
313 TestNetworkPlugins/group/calico/DNS 0.21
314 TestNetworkPlugins/group/calico/Localhost 0.21
315 TestNetworkPlugins/group/calico/HairPin 0.18
316 TestNetworkPlugins/group/bridge/Start 61.98
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.29
319 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
320 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
321 TestNetworkPlugins/group/custom-flannel/DNS 0.19
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
330 TestStartStop/group/no-preload/serial/FirstStart 96.11
331 TestNetworkPlugins/group/flannel/ControllerPod 6.01
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
333 TestNetworkPlugins/group/bridge/NetCatPod 14.29
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
335 TestNetworkPlugins/group/flannel/NetCatPod 14.35
336 TestNetworkPlugins/group/bridge/DNS 0.15
337 TestNetworkPlugins/group/bridge/Localhost 0.12
338 TestNetworkPlugins/group/bridge/HairPin 0.14
339 TestNetworkPlugins/group/flannel/DNS 0.16
340 TestNetworkPlugins/group/flannel/Localhost 0.13
341 TestNetworkPlugins/group/flannel/HairPin 0.13
343 TestStartStop/group/embed-certs/serial/FirstStart 99.8
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 116.86
346 TestStartStop/group/no-preload/serial/DeployApp 9.3
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
349 TestStartStop/group/embed-certs/serial/DeployApp 8.28
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
356 TestStartStop/group/no-preload/serial/SecondStart 684.13
360 TestStartStop/group/embed-certs/serial/SecondStart 522.82
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 566.61
363 TestStartStop/group/old-k8s-version/serial/Stop 2.29
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
375 TestStartStop/group/newest-cni/serial/FirstStart 46.36
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
378 TestStartStop/group/newest-cni/serial/Stop 11.55
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
380 TestStartStop/group/newest-cni/serial/SecondStart 71.01
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
384 TestStartStop/group/newest-cni/serial/Pause 2.3
x
+
TestDownloadOnly/v1.20.0/json-events (8.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-350833 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-350833 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.095841121s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-350833
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-350833: exit status 85 (59.550501ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-350833 | jenkins | v1.33.1 | 29 Jul 24 13:11 UTC |          |
	|         | -p download-only-350833        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:11:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:11:53.666354  982058 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:11:53.666603  982058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:11:53.666611  982058 out.go:304] Setting ErrFile to fd 2...
	I0729 13:11:53.666616  982058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:11:53.666823  982058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	W0729 13:11:53.666947  982058 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19338-974764/.minikube/config/config.json: open /home/jenkins/minikube-integration/19338-974764/.minikube/config/config.json: no such file or directory
	I0729 13:11:53.667508  982058 out.go:298] Setting JSON to true
	I0729 13:11:53.668700  982058 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10466,"bootTime":1722248248,"procs":390,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:11:53.668768  982058 start.go:139] virtualization: kvm guest
	I0729 13:11:53.671122  982058 out.go:97] [download-only-350833] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 13:11:53.671246  982058 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 13:11:53.671309  982058 notify.go:220] Checking for updates...
	I0729 13:11:53.672582  982058 out.go:169] MINIKUBE_LOCATION=19338
	I0729 13:11:53.673980  982058 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:11:53.675273  982058 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:11:53.676556  982058 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:11:53.677880  982058 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 13:11:53.680094  982058 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 13:11:53.680351  982058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:11:53.712329  982058 out.go:97] Using the kvm2 driver based on user configuration
	I0729 13:11:53.712356  982058 start.go:297] selected driver: kvm2
	I0729 13:11:53.712363  982058 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:11:53.712813  982058 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:11:53.712920  982058 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-974764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:11:53.728041  982058 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:11:53.728097  982058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:11:53.728634  982058 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 13:11:53.728788  982058 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 13:11:53.728816  982058 cni.go:84] Creating CNI manager for ""
	I0729 13:11:53.728827  982058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 13:11:53.728836  982058 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:11:53.728899  982058 start.go:340] cluster config:
	{Name:download-only-350833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-350833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:11:53.729069  982058 iso.go:125] acquiring lock: {Name:mk2bc72146110e230952d77b90cad2ea8182c9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:11:53.730887  982058 out.go:97] Downloading VM boot image ...
	I0729 13:11:53.730916  982058 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19338-974764/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:11:56.760119  982058 out.go:97] Starting "download-only-350833" primary control-plane node in "download-only-350833" cluster
	I0729 13:11:56.760142  982058 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:11:56.785153  982058 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 13:11:56.785179  982058 cache.go:56] Caching tarball of preloaded images
	I0729 13:11:56.785393  982058 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 13:11:56.787006  982058 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 13:11:56.787026  982058 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 13:11:56.814752  982058 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-350833 host does not exist
	  To start a cluster, run: "minikube start -p download-only-350833"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
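The Last Start log above shows the download-only flow fetching the ISO and the v1.20.0 preload tarball with a checksum query parameter (md5:f93b07cde9c3289306cbaeb7a1803c19). A rough way to repeat that integrity check by hand is sketched below; the expected hash and cache path are copied from the log above (not recomputed here) and would differ on another machine or Kubernetes version.

	# expected md5 comes from the ?checksum=md5:... query parameter in the download URL above
	expected=f93b07cde9c3289306cbaeb7a1803c19
	tarball=/home/jenkins/minikube-integration/19338-974764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4

	# compare the on-disk hash against the advertised one
	actual=$(md5sum "$tarball" | awk '{print $1}')
	[ "$actual" = "$expected" ] && echo "preload checksum OK" || echo "checksum mismatch: $actual"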

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-350833
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-336101 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-336101 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.002879707s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-336101
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-336101: exit status 85 (59.476018ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-350833 | jenkins | v1.33.1 | 29 Jul 24 13:11 UTC |                     |
	|         | -p download-only-350833        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| delete  | -p download-only-350833        | download-only-350833 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| start   | -o=json --download-only        | download-only-336101 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | -p download-only-336101        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:12:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:12:02.084247  982262 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:12:02.084379  982262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:12:02.084389  982262 out.go:304] Setting ErrFile to fd 2...
	I0729 13:12:02.084393  982262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:12:02.084596  982262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:12:02.085154  982262 out.go:298] Setting JSON to true
	I0729 13:12:02.086264  982262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10474,"bootTime":1722248248,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:12:02.086326  982262 start.go:139] virtualization: kvm guest
	I0729 13:12:02.088295  982262 out.go:97] [download-only-336101] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:12:02.088469  982262 notify.go:220] Checking for updates...
	I0729 13:12:02.089700  982262 out.go:169] MINIKUBE_LOCATION=19338
	I0729 13:12:02.091179  982262 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:12:02.092512  982262 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:12:02.093939  982262 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:12:02.095084  982262 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-336101 host does not exist
	  To start a cluster, run: "minikube start -p download-only-336101"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-336101
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (4.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-960541 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-960541 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.166783906s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (4.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-960541
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-960541: exit status 85 (59.60666ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-350833 | jenkins | v1.33.1 | 29 Jul 24 13:11 UTC |                     |
	|         | -p download-only-350833             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| delete  | -p download-only-350833             | download-only-350833 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| start   | -o=json --download-only             | download-only-336101 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | -p download-only-336101             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| delete  | -p download-only-336101             | download-only-336101 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC | 29 Jul 24 13:12 UTC |
	| start   | -o=json --download-only             | download-only-960541 | jenkins | v1.33.1 | 29 Jul 24 13:12 UTC |                     |
	|         | -p download-only-960541             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 13:12:06
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 13:12:06.403890  982453 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:12:06.404133  982453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:12:06.404143  982453 out.go:304] Setting ErrFile to fd 2...
	I0729 13:12:06.404147  982453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:12:06.404328  982453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:12:06.404889  982453 out.go:298] Setting JSON to true
	I0729 13:12:06.405967  982453 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10478,"bootTime":1722248248,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:12:06.406026  982453 start.go:139] virtualization: kvm guest
	I0729 13:12:06.408014  982453 out.go:97] [download-only-960541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:12:06.408172  982453 notify.go:220] Checking for updates...
	I0729 13:12:06.409616  982453 out.go:169] MINIKUBE_LOCATION=19338
	I0729 13:12:06.410935  982453 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:12:06.412235  982453 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:12:06.413393  982453 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:12:06.414560  982453 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-960541 host does not exist
	  To start a cluster, run: "minikube start -p download-only-960541"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-960541
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-696925 --alsologtostderr --binary-mirror http://127.0.0.1:33915 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-696925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-696925
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
x
+
TestOffline (81.87s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-715623 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-715623 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.128184414s)
helpers_test.go:175: Cleaning up "offline-crio-715623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-715623
--- PASS: TestOffline (81.87s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-881745
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-881745: exit status 85 (47.015726ms)

                                                
                                                
-- stdout --
	* Profile "addons-881745" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-881745"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-881745
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-881745: exit status 85 (47.708313ms)

-- stdout --
	* Profile "addons-881745" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-881745"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (136.92s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-881745 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-881745 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m16.921566117s)
--- PASS: TestAddons/Setup (136.92s)
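
Note: the addon set exercised here does not have to be enabled at start time. A rough hand-run equivalent, sketched from the commands logged in this report (not part of the test itself), is to start the profile and toggle addons afterwards:

  # start the same profile with the same driver/runtime the test uses
  out/minikube-linux-amd64 start -p addons-881745 --wait=true --memory=4000 --driver=kvm2 --container-runtime=crio
  # enable addons individually once the cluster is up, then confirm their status
  out/minikube-linux-amd64 addons enable metrics-server -p addons-881745
  out/minikube-linux-amd64 addons enable ingress -p addons-881745
  out/minikube-linux-amd64 addons list -p addons-881745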

TestAddons/serial/GCPAuth/Namespaces (1.7s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-881745 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-881745 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-881745 get secret gcp-auth -n new-namespace: exit status 1 (82.165143ms)

** stderr **
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-881745 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-881745 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.70s)

TestAddons/parallel/Registry (16.29s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.651373ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-z2p5s" [96173b90-f986-42c3-8dab-68759432df0d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004602595s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ljt48" [28156caa-8805-4e7d-a425-0e65cdbb245b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010970781s
addons_test.go:342: (dbg) Run:  kubectl --context addons-881745 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-881745 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-881745 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.013619762s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 ip
2024/07/29 13:15:14 [DEBUG] GET http://192.168.39.103:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 addons disable registry --alsologtostderr -v=1: (1.098066649s)
--- PASS: TestAddons/parallel/Registry (16.29s)
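
The registry check above can be repeated by hand; the sketch below only reuses commands that appear in this log (VM IP lookup, a probe of the registry proxy on port 5000, and the in-cluster wget through the registry Service):

  # resolve the VM IP for the profile and probe the registry proxy endpoint
  IP=$(out/minikube-linux-amd64 -p addons-881745 ip)
  curl -sI "http://${IP}:5000"
  # in-cluster check: reach the registry Service DNS name from a throwaway busybox pod
  kubectl --context addons-881745 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"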

TestAddons/parallel/InspektorGadget (11.81s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-78x6w" [b51dad19-9354-4d16-ac5a-41dc8dd8ef7e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004664649s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-881745
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-881745: (5.809181231s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

TestAddons/parallel/HelmTiller (9.82s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.502883ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-h58h7" [37213271-b4d7-4a89-bd83-34aacc2ec941] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005022425s
addons_test.go:475: (dbg) Run:  kubectl --context addons-881745 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-881745 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.243150805s)
addons_test.go:480: kubectl --context addons-881745 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.82s)

TestAddons/parallel/CSI (58.69s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.690808ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-881745 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-881745 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [47c449f6-a052-473f-ba3d-f2cac7c02571] Pending
helpers_test.go:344: "task-pv-pod" [47c449f6-a052-473f-ba3d-f2cac7c02571] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [47c449f6-a052-473f-ba3d-f2cac7c02571] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005758941s
addons_test.go:590: (dbg) Run:  kubectl --context addons-881745 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-881745 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-881745 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-881745 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-881745 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-881745 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-881745 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b126350a-4822-433c-9437-09e1a561bceb] Pending
helpers_test.go:344: "task-pv-pod-restore" [b126350a-4822-433c-9437-09e1a561bceb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b126350a-4822-433c-9437-09e1a561bceb] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004305133s
addons_test.go:632: (dbg) Run:  kubectl --context addons-881745 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-881745 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-881745 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.847229036s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.69s)
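
The long run of `get pvc ... -o jsonpath={.status.phase}` lines above is the test helper polling the claim's phase. A small illustrative loop (my construction, not the test's code) that watches the same field by hand; note that with a WaitForFirstConsumer storage class the claim may stay Pending until a pod actually consumes it:

  # poll the PVC provisioned by the csi-hostpath driver until it reports Bound
  until [ "$(kubectl --context addons-881745 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
    sleep 2
  done
  kubectl --context addons-881745 get pvc hpvc -n default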

TestAddons/parallel/Headlamp (18.89s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-881745 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-881745 --alsologtostderr -v=1: (1.216067358s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-khf7v" [b963a65b-dc59-4f8b-b26f-be8d7218859a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-khf7v" [b963a65b-dc59-4f8b-b26f-be8d7218859a] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004058265s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 addons disable headlamp --alsologtostderr -v=1: (5.672855476s)
--- PASS: TestAddons/parallel/Headlamp (18.89s)

TestAddons/parallel/CloudSpanner (6.6s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-dwmk6" [fe3153fe-c283-4d7e-89dc-a02322e56676] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006289409s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-881745
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

TestAddons/parallel/LocalPath (51.3s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-881745 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-881745 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881745 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b5a42854-0287-4703-abe2-37a4fa5b70cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b5a42854-0287-4703-abe2-37a4fa5b70cf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b5a42854-0287-4703-abe2-37a4fa5b70cf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003766446s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-881745 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 ssh "cat /opt/local-path-provisioner/pvc-2f86f84c-0179-4267-9abb-37e36ba02c83_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-881745 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-881745 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.488296019s)
--- PASS: TestAddons/parallel/LocalPath (51.30s)

TestAddons/parallel/NvidiaDevicePlugin (6.48s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2mgsg" [e736085a-8a65-4ef9-a69b-d309fa46e0b7] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004534262s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-881745
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

TestAddons/parallel/Yakd (10.72s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-5hxz5" [33af7341-1cbe-4836-ab08-c448d7d4c3c8] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007365703s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-881745 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-881745 addons disable yakd --alsologtostderr -v=1: (5.70820054s)
--- PASS: TestAddons/parallel/Yakd (10.72s)

TestCertOptions (45.22s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-442776 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-442776 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.063261063s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-442776 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-442776 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-442776 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-442776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-442776
--- PASS: TestCertOptions (45.22s)
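
What the assertions above amount to: the apiserver certificate generated for this profile must contain the extra SANs and the custom port passed on the command line. A hand-run sketch (the grep filter and --minify flag are additions for readability, not part of the test):

  # list the SANs baked into the apiserver certificate inside the VM
  out/minikube-linux-amd64 -p cert-options-442776 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
  # confirm the kubeconfig entry points at the custom apiserver port 8555
  kubectl --context cert-options-442776 config view --minify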

TestCertExpiration (263.67s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-869983 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-869983 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (42.869319126s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-869983 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-869983 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.880200236s)
helpers_test.go:175: Cleaning up "cert-expiration-869983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-869983
--- PASS: TestCertExpiration (263.67s)
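
The flow above: create a cluster whose certificates expire in three minutes, let them lapse, then run start again with a longer --cert-expiration so the certificates are regenerated. A condensed sketch of the same sequence (the sleep is illustrative; the flags are taken from the logged commands):

  out/minikube-linux-amd64 start -p cert-expiration-869983 --memory=2048 \
    --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  sleep 180   # wait out the 3m certificate lifetime
  out/minikube-linux-amd64 start -p cert-expiration-869983 --memory=2048 \
    --cert-expiration=8760h --driver=kvm2 --container-runtime=crio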

TestForceSystemdFlag (57.69s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-956245 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-956245 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (56.631855372s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-956245 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-956245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-956245
--- PASS: TestForceSystemdFlag (57.69s)

TestForceSystemdEnv (45.81s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-764732 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-764732 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.921146598s)
helpers_test.go:175: Cleaning up "force-systemd-env-764732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-764732
--- PASS: TestForceSystemdEnv (45.81s)

TestKVMDriverInstallOrUpdate (1.32s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.32s)

TestErrorSpam/setup (40.86s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-144411 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-144411 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-144411 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-144411 --driver=kvm2  --container-runtime=crio: (40.864378757s)
--- PASS: TestErrorSpam/setup (40.86s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.72s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.51s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.56s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (4.55s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 stop: (1.468722087s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 stop: (2.027761329s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-144411 --log_dir /tmp/nospam-144411 stop: (1.05546328s)
--- PASS: TestErrorSpam/stop (4.55s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19338-974764/.minikube/files/etc/test/nested/copy/982046/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (95.67s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-669544 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0729 13:24:30.664574  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:30.670398  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:30.680645  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:30.700911  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:30.741197  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:30.821625  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:30.982060  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:31.302415  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:31.943232  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:33.223967  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:35.785739  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:40.906719  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:24:51.147174  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:25:11.628240  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-669544 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m35.672298361s)
--- PASS: TestFunctional/serial/StartWithProxy (95.67s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.22s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-669544 --alsologtostderr -v=8
E0729 13:25:52.588929  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-669544 --alsologtostderr -v=8: (35.222430006s)
functional_test.go:659: soft start took 35.223082338s for "functional-669544" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.22s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-669544 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 cache add registry.k8s.io/pause:3.1: (1.045226856s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 cache add registry.k8s.io/pause:3.3: (1.080727774s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 cache add registry.k8s.io/pause:latest: (1.004271714s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.01s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-669544 /tmp/TestFunctionalserialCacheCmdcacheadd_local3242181817/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cache add minikube-local-cache-test:functional-669544
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cache delete minikube-local-cache-test:functional-669544
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-669544
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.01s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.933667ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
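
The cache_reload sequence exercises a full round trip: cache an image on the host, delete it from the node's container runtime, then ask minikube to push the cached copy back. A by-hand sketch using the same commands as the log:

  # cache the image and load it into the cluster
  out/minikube-linux-amd64 -p functional-669544 cache add registry.k8s.io/pause:latest
  # remove it from the node so crictl no longer sees it
  out/minikube-linux-amd64 -p functional-669544 ssh sudo crictl rmi registry.k8s.io/pause:latest
  # reload from the host-side cache and verify the image is back
  out/minikube-linux-amd64 -p functional-669544 cache reload
  out/minikube-linux-amd64 -p functional-669544 ssh sudo crictl inspecti registry.k8s.io/pause:latest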

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 kubectl -- --context functional-669544 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-669544 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (32.71s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-669544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-669544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.713299381s)
functional_test.go:757: restart took 32.713455396s for "functional-669544" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.71s)
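
ExtraConfig restarts the existing profile with a component flag passed through --extra-config. A sketch of the same invocation, plus a follow-up check that is my addition (not run by the test) to confirm the admission plugin reached the apiserver arguments:

  out/minikube-linux-amd64 start -p functional-669544 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  # assumption: the kube-apiserver static pod exposes the flag in its container command
  kubectl --context functional-669544 -n kube-system get pod -l component=kube-apiserver \
    -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission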

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-669544 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.51s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 logs: (1.508026233s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.49s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 logs --file /tmp/TestFunctionalserialLogsFileCmd1363094323/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 logs --file /tmp/TestFunctionalserialLogsFileCmd1363094323/001/logs.txt: (1.490053156s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.51s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-669544 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-669544
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-669544: exit status 115 (288.652086ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.142:31242 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-669544 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-669544 delete -f testdata/invalidsvc.yaml: (1.028194309s)
--- PASS: TestFunctional/serial/InvalidService (4.51s)

TestFunctional/parallel/ConfigCmd (0.35s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 config get cpus: exit status 14 (65.696794ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 config get cpus: exit status 14 (51.891468ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
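
The ConfigCmd exchange documents the `minikube config` contract seen above: `get` on an unset key exits with status 14 and prints "Error: specified key could not be found in config", while `set` stores a value that a later `get` returns. A by-hand sketch:

  out/minikube-linux-amd64 -p functional-669544 config set cpus 2
  out/minikube-linux-amd64 -p functional-669544 config get cpus      # prints the stored value
  out/minikube-linux-amd64 -p functional-669544 config unset cpus
  out/minikube-linux-amd64 -p functional-669544 config get cpus; echo "exit=$?"   # expect exit=14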

TestFunctional/parallel/DashboardCmd (7.78s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-669544 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-669544 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 992145: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.78s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-669544 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-669544 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.458466ms)

-- stdout --
	* [functional-669544] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0729 13:27:20.216585  991781 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:27:20.216708  991781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:20.216717  991781 out.go:304] Setting ErrFile to fd 2...
	I0729 13:27:20.216721  991781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:20.216933  991781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:27:20.217426  991781 out.go:298] Setting JSON to false
	I0729 13:27:20.218447  991781 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11392,"bootTime":1722248248,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:27:20.218508  991781 start.go:139] virtualization: kvm guest
	I0729 13:27:20.220921  991781 out.go:177] * [functional-669544] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:27:20.222394  991781 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:27:20.222476  991781 notify.go:220] Checking for updates...
	I0729 13:27:20.225127  991781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:27:20.226487  991781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:27:20.227848  991781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:27:20.229284  991781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:27:20.230556  991781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:27:20.232166  991781 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:27:20.232678  991781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:27:20.232754  991781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:27:20.247381  991781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0729 13:27:20.247815  991781 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:27:20.248333  991781 main.go:141] libmachine: Using API Version  1
	I0729 13:27:20.248357  991781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:27:20.248697  991781 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:27:20.248889  991781 main.go:141] libmachine: (functional-669544) Calling .DriverName
	I0729 13:27:20.249223  991781 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:27:20.249559  991781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:27:20.249597  991781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:27:20.264300  991781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I0729 13:27:20.264732  991781 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:27:20.265211  991781 main.go:141] libmachine: Using API Version  1
	I0729 13:27:20.265230  991781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:27:20.265575  991781 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:27:20.265764  991781 main.go:141] libmachine: (functional-669544) Calling .DriverName
	I0729 13:27:20.300669  991781 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 13:27:20.302204  991781 start.go:297] selected driver: kvm2
	I0729 13:27:20.302219  991781 start.go:901] validating driver "kvm2" against &{Name:functional-669544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-669544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:27:20.302352  991781 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:27:20.304611  991781 out.go:177] 
	W0729 13:27:20.305914  991781 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 13:27:20.307093  991781 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-669544 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
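Both invocations above validate the requested flags against the existing profile without starting anything: the first deliberately asks for 250MB and trips the RSRC_INSUFFICIENT_REQ_MEMORY guard (exit status 23), while the second, which inherits the profile's configured 4000MB, succeeds. Reproduced by hand:

    # dry-run with too little memory: fails fast with exit status 23
    out/minikube-linux-amd64 start -p functional-669544 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # dry-run against the profile's own settings: exits 0
    out/minikube-linux-amd64 start -p functional-669544 --dry-run --driver=kvm2 --container-runtime=crio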

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-669544 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-669544 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.688313ms)

                                                
                                                
-- stdout --
	* [functional-669544] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:27:20.463224  991865 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:27:20.463341  991865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:20.463352  991865 out.go:304] Setting ErrFile to fd 2...
	I0729 13:27:20.463356  991865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:27:20.463646  991865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:27:20.464182  991865 out.go:298] Setting JSON to false
	I0729 13:27:20.465403  991865 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11392,"bootTime":1722248248,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:27:20.465467  991865 start.go:139] virtualization: kvm guest
	I0729 13:27:20.467277  991865 out.go:177] * [functional-669544] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 13:27:20.468964  991865 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:27:20.468989  991865 notify.go:220] Checking for updates...
	I0729 13:27:20.471291  991865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:27:20.472423  991865 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 13:27:20.473632  991865 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 13:27:20.475000  991865 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:27:20.476133  991865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:27:20.477883  991865 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:27:20.478460  991865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:27:20.478543  991865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:27:20.494281  991865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0729 13:27:20.494825  991865 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:27:20.495394  991865 main.go:141] libmachine: Using API Version  1
	I0729 13:27:20.495421  991865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:27:20.495798  991865 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:27:20.495973  991865 main.go:141] libmachine: (functional-669544) Calling .DriverName
	I0729 13:27:20.496280  991865 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:27:20.496745  991865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:27:20.496797  991865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:27:20.512482  991865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I0729 13:27:20.512918  991865 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:27:20.513505  991865 main.go:141] libmachine: Using API Version  1
	I0729 13:27:20.513538  991865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:27:20.513931  991865 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:27:20.514151  991865 main.go:141] libmachine: (functional-669544) Calling .DriverName
	I0729 13:27:20.555105  991865 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 13:27:20.556432  991865 start.go:297] selected driver: kvm2
	I0729 13:27:20.556453  991865 start.go:901] validating driver "kvm2" against &{Name:functional-669544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-669544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:27:20.556635  991865 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:27:20.559164  991865 out.go:177] 
	W0729 13:27:20.560632  991865 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 13:27:20.562263  991865 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
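This is the same dry-run failure as above, rendered through minikube's French translations. The log does not show how the harness selects the language, so the locale variable below is an assumption (minikube normally follows the standard LC_ALL/LANG environment variables):

    # assumption: a French locale env var is what switches the messages to French
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-669544 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." and exit status 23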

                                                
                                    
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
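The three status calls cover the default output, a custom Go-template format string, and JSON. The template fields the test uses (.Host, .Kubelet, .APIServer, .Kubeconfig) are the interesting part; "kublet" is just the literal label text the test happens to print next to the .Kubelet value. For reference:

    # default, templated, and JSON status for the same profile
    out/minikube-linux-amd64 -p functional-669544 status
    out/minikube-linux-amd64 -p functional-669544 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-669544 status -o json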

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-669544 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-669544 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-hrr5r" [0db2f01e-7144-461b-982b-cd2d670d98dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-hrr5r" [0db2f01e-7144-461b-982b-cd2d670d98dc] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00611697s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.50.142:31096
functional_test.go:1671: http://192.168.50.142:31096: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-hrr5r

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.142:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.142:31096
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
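The echoserver body above is what the test asserts on end to end: a Deployment is created, exposed as a NodePort Service, resolved to a node URL via minikube, and fetched over HTTP. The same flow by hand, using the image and names from the test (the NodePort in the URL will differ per run):

    kubectl --context functional-669544 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-669544 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-669544 service hello-node-connect --url)   # e.g. http://192.168.50.142:31096
    curl "$URL"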

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6a320ad8-886d-415d-a43c-c77ef38091a5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005142693s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-669544 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-669544 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-669544 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-669544 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [289c8924-7e34-4bc2-8cc4-b525228ce845] Pending
helpers_test.go:344: "sp-pod" [289c8924-7e34-4bc2-8cc4-b525228ce845] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0729 13:27:14.509624  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [289c8924-7e34-4bc2-8cc4-b525228ce845] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.429299565s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-669544 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-669544 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-669544 delete -f testdata/storage-provisioner/pod.yaml: (2.126829178s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-669544 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f2222051-0ea8-4fef-b0b8-80ff62e51acb] Pending
2024/07/29 13:27:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [f2222051-0ea8-4fef-b0b8-80ff62e51acb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f2222051-0ea8-4fef-b0b8-80ff62e51acb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003532846s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-669544 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.49s)
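The PVC test binds a claim through the bundled storage-provisioner, writes /tmp/mount/foo from one pod, deletes that pod, and then checks the file is still visible from a replacement pod using the same claim, which is the persistence guarantee being verified. The real manifests live under testdata/storage-provisioner; a minimal stand-in claim (hypothetical size, not the repo's file) that relies on the default StorageClass would be:

    # hypothetical minimal claim bound by the default StorageClass
    kubectl --context functional-669544 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF
    kubectl --context functional-669544 get pvc myclaim -o json   # status.phase should reach Bound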

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh -n functional-669544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cp functional-669544:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd540358899/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh -n functional-669544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh -n functional-669544 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.27s)
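The three cp runs cover host-to-guest, guest-to-host, and host-to-guest into a directory that does not exist yet (the ssh cat that follows proves the parent directories were created). In summary:

    out/minikube-linux-amd64 -p functional-669544 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> guest
    out/minikube-linux-amd64 -p functional-669544 cp functional-669544:/home/docker/cp-test.txt ./cp-test.txt # guest -> host
    out/minikube-linux-amd64 -p functional-669544 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # missing parents are created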

                                                
                                    
TestFunctional/parallel/MySQL (20.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-669544 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-szjf7" [f23e0451-0753-4fc0-a800-bd123a30f687] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-szjf7" [f23e0451-0753-4fc0-a800-bd123a30f687] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004872554s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-669544 exec mysql-64454c8b5c-szjf7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.33s)
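The MySQL check deploys testdata/mysql.yaml, waits (up to 10m, 20s here) for the pod to report Running, and then issues a query through kubectl exec. The query step on its own, with the pod name taken from this run:

    # list databases inside the pod (look the pod name up with: kubectl get pods -l app=mysql)
    kubectl --context functional-669544 exec mysql-64454c8b5c-szjf7 -- mysql -ppassword -e "show databases;"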

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/982046/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo cat /etc/test/nested/copy/982046/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/982046.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo cat /etc/ssl/certs/982046.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/982046.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo cat /usr/share/ca-certificates/982046.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/9820462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo cat /etc/ssl/certs/9820462.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/9820462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo cat /usr/share/ca-certificates/9820462.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)
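CertSync checks that a host certificate is present inside the VM both under its plain filename (982046.pem / 9820462.pem, numbers matching the test process id seen elsewhere in this log) and under a hash-style name in /etc/ssl/certs (51391683.0, 3ec20f2e.0), which look like OpenSSL subject-hash links. Assuming that naming, the hash for a given certificate can be computed on the host (path is illustrative):

    # print the subject hash that a <hash>.0 filename in /etc/ssl/certs would use
    openssl x509 -hash -noout -in /path/to/982046.pem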

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-669544 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 ssh "sudo systemctl is-active docker": exit status 1 (265.78605ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 ssh "sudo systemctl is-active containerd": exit status 1 (253.388013ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
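The non-zero exits here are the point of the test: with cri-o as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` each print "inactive" and exit with status 3 (visible as "ssh: Process exited with status 3" above), which minikube ssh surfaces as its own exit status 1. Checked directly:

    # both should print "inactive" and exit non-zero on a crio-backed node
    out/minikube-linux-amd64 -p functional-669544 ssh "sudo systemctl is-active docker";     echo "exit=$?"
    out/minikube-linux-amd64 -p functional-669544 ssh "sudo systemctl is-active containerd"; echo "exit=$?"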

                                                
                                    
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (22.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-669544 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-669544 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-lkbqt" [a2b746ce-6bd0-40af-965a-306b6e5b28a2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-lkbqt" [a2b746ce-6bd0-40af-965a-306b6e5b28a2] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.004490225s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdany-port2544327913/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722259627772597190" to /tmp/TestFunctionalparallelMountCmdany-port2544327913/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722259627772597190" to /tmp/TestFunctionalparallelMountCmdany-port2544327913/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722259627772597190" to /tmp/TestFunctionalparallelMountCmdany-port2544327913/001/test-1722259627772597190
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (229.897368ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 13:27 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 13:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 13:27 test-1722259627772597190
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh cat /mount-9p/test-1722259627772597190
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-669544 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [360b59a0-ae94-4835-b4b7-f1d526f736fd] Pending
helpers_test.go:344: "busybox-mount" [360b59a0-ae94-4835-b4b7-f1d526f736fd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [360b59a0-ae94-4835-b4b7-f1d526f736fd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [360b59a0-ae94-4835-b4b7-f1d526f736fd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003686908s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-669544 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdany-port2544327913/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.52s)
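The mount test starts `minikube mount` as a background daemon, confirms the 9p mount from inside the VM (the first findmnt probe fails once before the mount is ready, hence the retry above), reads and writes through it from a busybox pod, and finally unmounts. The core commands, with /tmp/some-host-dir standing in for the temp directory the test creates:

    # export a host directory into the VM over 9p (stays in the foreground until stopped)
    out/minikube-linux-amd64 mount -p functional-669544 /tmp/some-host-dir:/mount-9p &
    # verify and inspect the mount from inside the VM
    out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-669544 ssh -- ls -la /mount-9p
    # clean up
    out/minikube-linux-amd64 -p functional-669544 ssh "sudo umount -f /mount-9p"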

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "220.07711ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "51.829624ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "277.406728ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "49.602017ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-669544 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-669544
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-669544
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-669544 image ls --format short --alsologtostderr:
I0729 13:27:30.447602  992373 out.go:291] Setting OutFile to fd 1 ...
I0729 13:27:30.447732  992373 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:30.447744  992373 out.go:304] Setting ErrFile to fd 2...
I0729 13:27:30.447751  992373 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:30.447980  992373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
I0729 13:27:30.448632  992373 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:30.448738  992373 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:30.449126  992373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:30.449162  992373 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:30.465066  992373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
I0729 13:27:30.465656  992373 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:30.466324  992373 main.go:141] libmachine: Using API Version  1
I0729 13:27:30.466357  992373 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:30.466718  992373 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:30.466924  992373 main.go:141] libmachine: (functional-669544) Calling .GetState
I0729 13:27:30.468856  992373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:30.468898  992373 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:30.484147  992373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
I0729 13:27:30.484690  992373 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:30.485245  992373 main.go:141] libmachine: Using API Version  1
I0729 13:27:30.485272  992373 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:30.485564  992373 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:30.485765  992373 main.go:141] libmachine: (functional-669544) Calling .DriverName
I0729 13:27:30.485996  992373 ssh_runner.go:195] Run: systemctl --version
I0729 13:27:30.486025  992373 main.go:141] libmachine: (functional-669544) Calling .GetSSHHostname
I0729 13:27:30.488757  992373 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:30.489151  992373 main.go:141] libmachine: (functional-669544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:d2:6f", ip: ""} in network mk-functional-669544: {Iface:virbr1 ExpiryTime:2024-07-29 14:24:22 +0000 UTC Type:0 Mac:52:54:00:45:d2:6f Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:functional-669544 Clientid:01:52:54:00:45:d2:6f}
I0729 13:27:30.489179  992373 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined IP address 192.168.50.142 and MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:30.489311  992373 main.go:141] libmachine: (functional-669544) Calling .GetSSHPort
I0729 13:27:30.489479  992373 main.go:141] libmachine: (functional-669544) Calling .GetSSHKeyPath
I0729 13:27:30.489645  992373 main.go:141] libmachine: (functional-669544) Calling .GetSSHUsername
I0729 13:27:30.489790  992373 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/functional-669544/id_rsa Username:docker}
I0729 13:27:30.578952  992373 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 13:27:30.616404  992373 main.go:141] libmachine: Making call to close driver server
I0729 13:27:30.616440  992373 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:30.616722  992373 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:30.616744  992373 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 13:27:30.616748  992373 main.go:141] libmachine: (functional-669544) DBG | Closing plugin on server side
I0729 13:27:30.616764  992373 main.go:141] libmachine: Making call to close driver server
I0729 13:27:30.616774  992373 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:30.616980  992373 main.go:141] libmachine: (functional-669544) DBG | Closing plugin on server side
I0729 13:27:30.616997  992373 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:30.617006  992373 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-669544 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| docker.io/kicbase/echo-server           | functional-669544  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-669544  | 0b95aebd61c3c | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-669544 image ls --format table --alsologtostderr:
I0729 13:27:32.619288  992618 out.go:291] Setting OutFile to fd 1 ...
I0729 13:27:32.619453  992618 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:32.619465  992618 out.go:304] Setting ErrFile to fd 2...
I0729 13:27:32.619471  992618 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:32.619718  992618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
I0729 13:27:32.620653  992618 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:32.620831  992618 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:32.621449  992618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:32.621513  992618 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:32.637109  992618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39797
I0729 13:27:32.637598  992618 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:32.638314  992618 main.go:141] libmachine: Using API Version  1
I0729 13:27:32.638349  992618 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:32.638787  992618 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:32.639023  992618 main.go:141] libmachine: (functional-669544) Calling .GetState
I0729 13:27:32.641106  992618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:32.641156  992618 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:32.656593  992618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
I0729 13:27:32.657088  992618 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:32.657684  992618 main.go:141] libmachine: Using API Version  1
I0729 13:27:32.657716  992618 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:32.658083  992618 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:32.658307  992618 main.go:141] libmachine: (functional-669544) Calling .DriverName
I0729 13:27:32.658523  992618 ssh_runner.go:195] Run: systemctl --version
I0729 13:27:32.658550  992618 main.go:141] libmachine: (functional-669544) Calling .GetSSHHostname
I0729 13:27:32.661535  992618 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:32.661945  992618 main.go:141] libmachine: (functional-669544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:d2:6f", ip: ""} in network mk-functional-669544: {Iface:virbr1 ExpiryTime:2024-07-29 14:24:22 +0000 UTC Type:0 Mac:52:54:00:45:d2:6f Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:functional-669544 Clientid:01:52:54:00:45:d2:6f}
I0729 13:27:32.661980  992618 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined IP address 192.168.50.142 and MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:32.662212  992618 main.go:141] libmachine: (functional-669544) Calling .GetSSHPort
I0729 13:27:32.662383  992618 main.go:141] libmachine: (functional-669544) Calling .GetSSHKeyPath
I0729 13:27:32.662535  992618 main.go:141] libmachine: (functional-669544) Calling .GetSSHUsername
I0729 13:27:32.662701  992618 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/functional-669544/id_rsa Username:docker}
I0729 13:27:32.795128  992618 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 13:27:32.965108  992618 main.go:141] libmachine: Making call to close driver server
I0729 13:27:32.965131  992618 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:32.965441  992618 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:32.965463  992618 main.go:141] libmachine: (functional-669544) DBG | Closing plugin on server side
I0729 13:27:32.965471  992618 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 13:27:32.965486  992618 main.go:141] libmachine: Making call to close driver server
I0729 13:27:32.965498  992618 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:32.965705  992618 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:32.965718  992618 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-669544 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16
b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-669544"],"size":"4943877"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8
da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha25
6:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"0184c1613d92931126feb4c548e5da
11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0b95aebd61c3cf988c1d288a5434481837ab91b7ca3c6c0bc7e6f174378c30de","re
poDigests":["localhost/minikube-local-cache-test@sha256:7e943217d92009d2e425d88bd1f04a0325d8082c75e99d952520fe6fa436803b"],"repoTags":["localhost/minikube-local-cache-test:functional-669544"],"size":"3330"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@
sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-669544 image ls --format json --alsologtostderr:
I0729 13:27:32.134013  992595 out.go:291] Setting OutFile to fd 1 ...
I0729 13:27:32.134139  992595 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:32.134147  992595 out.go:304] Setting ErrFile to fd 2...
I0729 13:27:32.134151  992595 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:32.134341  992595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
I0729 13:27:32.134928  992595 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:32.135024  992595 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:32.135393  992595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:32.135439  992595 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:32.152963  992595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45131
I0729 13:27:32.153441  992595 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:32.154035  992595 main.go:141] libmachine: Using API Version  1
I0729 13:27:32.154072  992595 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:32.154376  992595 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:32.154591  992595 main.go:141] libmachine: (functional-669544) Calling .GetState
I0729 13:27:32.156507  992595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:32.156559  992595 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:32.171616  992595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
I0729 13:27:32.172074  992595 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:32.172653  992595 main.go:141] libmachine: Using API Version  1
I0729 13:27:32.172680  992595 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:32.173012  992595 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:32.173198  992595 main.go:141] libmachine: (functional-669544) Calling .DriverName
I0729 13:27:32.173429  992595 ssh_runner.go:195] Run: systemctl --version
I0729 13:27:32.173470  992595 main.go:141] libmachine: (functional-669544) Calling .GetSSHHostname
I0729 13:27:32.176502  992595 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:32.176957  992595 main.go:141] libmachine: (functional-669544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:d2:6f", ip: ""} in network mk-functional-669544: {Iface:virbr1 ExpiryTime:2024-07-29 14:24:22 +0000 UTC Type:0 Mac:52:54:00:45:d2:6f Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:functional-669544 Clientid:01:52:54:00:45:d2:6f}
I0729 13:27:32.176990  992595 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined IP address 192.168.50.142 and MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:32.177109  992595 main.go:141] libmachine: (functional-669544) Calling .GetSSHPort
I0729 13:27:32.177314  992595 main.go:141] libmachine: (functional-669544) Calling .GetSSHKeyPath
I0729 13:27:32.177480  992595 main.go:141] libmachine: (functional-669544) Calling .GetSSHUsername
I0729 13:27:32.177638  992595 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/functional-669544/id_rsa Username:docker}
I0729 13:27:32.312859  992595 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 13:27:32.559485  992595 main.go:141] libmachine: Making call to close driver server
I0729 13:27:32.559499  992595 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:32.559882  992595 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:32.559902  992595 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 13:27:32.559928  992595 main.go:141] libmachine: Making call to close driver server
I0729 13:27:32.559936  992595 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:32.560208  992595 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:32.560234  992595 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)
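For reference, the JSON printed by "image ls --format json" above is a flat array of records with id, repoDigests, repoTags and size fields. A minimal Go sketch for consuming that output from stdin; the struct fields are inferred from the stdout captured above and the program is illustrative, not part of the test suite:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors one entry of the "image ls --format json" output;
// field names are inferred from the stdout captured above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a decimal byte count in a string
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %s bytes\n", tag, img.Size)
	}
}

A hypothetical invocation: out/minikube-linux-amd64 -p functional-669544 image ls --format json | go run listimages.go (the file name is illustrative).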

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-669544 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 0b95aebd61c3cf988c1d288a5434481837ab91b7ca3c6c0bc7e6f174378c30de
repoDigests:
- localhost/minikube-local-cache-test@sha256:7e943217d92009d2e425d88bd1f04a0325d8082c75e99d952520fe6fa436803b
repoTags:
- localhost/minikube-local-cache-test:functional-669544
size: "3330"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-669544
size: "4943877"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-669544 image ls --format yaml --alsologtostderr:
I0729 13:27:30.664287  992412 out.go:291] Setting OutFile to fd 1 ...
I0729 13:27:30.664395  992412 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:30.664422  992412 out.go:304] Setting ErrFile to fd 2...
I0729 13:27:30.664427  992412 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:30.664591  992412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
I0729 13:27:30.665138  992412 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:30.665235  992412 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:30.665580  992412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:30.665622  992412 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:30.680451  992412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44063
I0729 13:27:30.680921  992412 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:30.681499  992412 main.go:141] libmachine: Using API Version  1
I0729 13:27:30.681531  992412 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:30.681837  992412 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:30.682045  992412 main.go:141] libmachine: (functional-669544) Calling .GetState
I0729 13:27:30.683796  992412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:30.683838  992412 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:30.698796  992412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
I0729 13:27:30.699194  992412 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:30.699657  992412 main.go:141] libmachine: Using API Version  1
I0729 13:27:30.699682  992412 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:30.699971  992412 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:30.700131  992412 main.go:141] libmachine: (functional-669544) Calling .DriverName
I0729 13:27:30.700366  992412 ssh_runner.go:195] Run: systemctl --version
I0729 13:27:30.700395  992412 main.go:141] libmachine: (functional-669544) Calling .GetSSHHostname
I0729 13:27:30.702771  992412 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:30.703117  992412 main.go:141] libmachine: (functional-669544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:d2:6f", ip: ""} in network mk-functional-669544: {Iface:virbr1 ExpiryTime:2024-07-29 14:24:22 +0000 UTC Type:0 Mac:52:54:00:45:d2:6f Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:functional-669544 Clientid:01:52:54:00:45:d2:6f}
I0729 13:27:30.703143  992412 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined IP address 192.168.50.142 and MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:30.703243  992412 main.go:141] libmachine: (functional-669544) Calling .GetSSHPort
I0729 13:27:30.703422  992412 main.go:141] libmachine: (functional-669544) Calling .GetSSHKeyPath
I0729 13:27:30.703663  992412 main.go:141] libmachine: (functional-669544) Calling .GetSSHUsername
I0729 13:27:30.703839  992412 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/functional-669544/id_rsa Username:docker}
I0729 13:27:30.822969  992412 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 13:27:30.932399  992412 main.go:141] libmachine: Making call to close driver server
I0729 13:27:30.932435  992412 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:30.932781  992412 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:30.932802  992412 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 13:27:30.932818  992412 main.go:141] libmachine: Making call to close driver server
I0729 13:27:30.932826  992412 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:30.933087  992412 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:30.933104  992412 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 ssh pgrep buildkitd: exit status 1 (250.516067ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image build -t localhost/my-image:functional-669544 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 image build -t localhost/my-image:functional-669544 testdata/build --alsologtostderr: (2.859272527s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-669544 image build -t localhost/my-image:functional-669544 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 979c3b66cef
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-669544
--> 39f76fd2d82
Successfully tagged localhost/my-image:functional-669544
39f76fd2d8251575e95a8b2f25e47ff69e467d1caf44fcdf7b9363673620da03
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-669544 image build -t localhost/my-image:functional-669544 testdata/build --alsologtostderr:
I0729 13:27:31.235024  992494 out.go:291] Setting OutFile to fd 1 ...
I0729 13:27:31.235133  992494 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:31.235143  992494 out.go:304] Setting ErrFile to fd 2...
I0729 13:27:31.235147  992494 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 13:27:31.235334  992494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
I0729 13:27:31.235933  992494 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:31.236690  992494 config.go:182] Loaded profile config "functional-669544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 13:27:31.237219  992494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:31.237269  992494 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:31.252519  992494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
I0729 13:27:31.253048  992494 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:31.253648  992494 main.go:141] libmachine: Using API Version  1
I0729 13:27:31.253676  992494 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:31.254043  992494 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:31.254242  992494 main.go:141] libmachine: (functional-669544) Calling .GetState
I0729 13:27:31.256221  992494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 13:27:31.256269  992494 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 13:27:31.270935  992494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
I0729 13:27:31.271356  992494 main.go:141] libmachine: () Calling .GetVersion
I0729 13:27:31.271918  992494 main.go:141] libmachine: Using API Version  1
I0729 13:27:31.271953  992494 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 13:27:31.272271  992494 main.go:141] libmachine: () Calling .GetMachineName
I0729 13:27:31.272480  992494 main.go:141] libmachine: (functional-669544) Calling .DriverName
I0729 13:27:31.272755  992494 ssh_runner.go:195] Run: systemctl --version
I0729 13:27:31.272786  992494 main.go:141] libmachine: (functional-669544) Calling .GetSSHHostname
I0729 13:27:31.275707  992494 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:31.276161  992494 main.go:141] libmachine: (functional-669544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:d2:6f", ip: ""} in network mk-functional-669544: {Iface:virbr1 ExpiryTime:2024-07-29 14:24:22 +0000 UTC Type:0 Mac:52:54:00:45:d2:6f Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:functional-669544 Clientid:01:52:54:00:45:d2:6f}
I0729 13:27:31.276195  992494 main.go:141] libmachine: (functional-669544) DBG | domain functional-669544 has defined IP address 192.168.50.142 and MAC address 52:54:00:45:d2:6f in network mk-functional-669544
I0729 13:27:31.276356  992494 main.go:141] libmachine: (functional-669544) Calling .GetSSHPort
I0729 13:27:31.276552  992494 main.go:141] libmachine: (functional-669544) Calling .GetSSHKeyPath
I0729 13:27:31.276703  992494 main.go:141] libmachine: (functional-669544) Calling .GetSSHUsername
I0729 13:27:31.276852  992494 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/functional-669544/id_rsa Username:docker}
I0729 13:27:31.406562  992494 build_images.go:161] Building image from path: /tmp/build.3933917701.tar
I0729 13:27:31.406629  992494 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 13:27:31.431128  992494 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3933917701.tar
I0729 13:27:31.443936  992494 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3933917701.tar: stat -c "%s %y" /var/lib/minikube/build/build.3933917701.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3933917701.tar': No such file or directory
I0729 13:27:31.443989  992494 ssh_runner.go:362] scp /tmp/build.3933917701.tar --> /var/lib/minikube/build/build.3933917701.tar (3072 bytes)
I0729 13:27:31.518326  992494 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3933917701
I0729 13:27:31.529807  992494 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3933917701 -xf /var/lib/minikube/build/build.3933917701.tar
I0729 13:27:31.559864  992494 crio.go:315] Building image: /var/lib/minikube/build/build.3933917701
I0729 13:27:31.559935  992494 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-669544 /var/lib/minikube/build/build.3933917701 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 13:27:33.994398  992494 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-669544 /var/lib/minikube/build/build.3933917701 --cgroup-manager=cgroupfs: (2.434430448s)
I0729 13:27:33.994476  992494 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3933917701
I0729 13:27:34.019849  992494 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3933917701.tar
I0729 13:27:34.044187  992494 build_images.go:217] Built localhost/my-image:functional-669544 from /tmp/build.3933917701.tar
I0729 13:27:34.044224  992494 build_images.go:133] succeeded building to: functional-669544
I0729 13:27:34.044230  992494 build_images.go:134] failed building to: 
I0729 13:27:34.044260  992494 main.go:141] libmachine: Making call to close driver server
I0729 13:27:34.044274  992494 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:34.044577  992494 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:34.044606  992494 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 13:27:34.044615  992494 main.go:141] libmachine: Making call to close driver server
I0729 13:27:34.044621  992494 main.go:141] libmachine: (functional-669544) Calling .Close
I0729 13:27:34.044867  992494 main.go:141] libmachine: Successfully made call to close driver server
I0729 13:27:34.044887  992494 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)
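As the stderr above shows, "image build" tars the local build context, copies the tarball to /var/lib/minikube/build on the guest, extracts it there, and runs "sudo podman build ... --cgroup-manager=cgroupfs". A minimal Go sketch that drives the same CLI entry point the test uses; the binary path, profile name and image tag are the ones from this run and would need to be adjusted elsewhere:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same invocation the test makes; binary path, profile and tag are
	// specific to this CI run.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-669544",
		"image", "build",
		"-t", "localhost/my-image:functional-669544",
		"testdata/build",
		"--alsologtostderr",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "image build failed:", err)
		os.Exit(1)
	}
}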

TestFunctional/parallel/ImageCommands/Setup (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-669544
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.38s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image load --daemon docker.io/kicbase/echo-server:functional-669544 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 image load --daemon docker.io/kicbase/echo-server:functional-669544 --alsologtostderr: (1.981830669s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image load --daemon docker.io/kicbase/echo-server:functional-669544 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-669544
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image load --daemon docker.io/kicbase/echo-server:functional-669544 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image save docker.io/kicbase/echo-server:functional-669544 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image rm docker.io/kicbase/echo-server:functional-669544 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.447318609s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.69s)

TestFunctional/parallel/MountCmd/specific-port (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdspecific-port3471491565/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (185.741123ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdspecific-port3471491565/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 ssh "sudo umount -f /mount-9p": exit status 1 (208.984092ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-669544 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdspecific-port3471491565/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)
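The mount test above tolerates one failed findmnt probe before the 9p mount appears, then retries. A minimal Go sketch of the same poll-until-mounted pattern, assuming the minikube binary path and profile from this run; waitForMount is an illustrative helper, not part of the test code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls "minikube ssh findmnt" until the 9p mount shows up,
// mirroring the retry the test performs after its first non-zero exit.
func waitForMount(profile, path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", path)).CombinedOutput()
		if err == nil {
			fmt.Printf("mounted: %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not mounted within %s", path, timeout)
}

func main() {
	if err := waitForMount("functional-669544", "/mount-9p", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}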

TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4175372683/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4175372683/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4175372683/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T" /mount1: exit status 1 (269.181966ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-669544 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4175372683/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4175372683/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-669544 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4175372683/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-669544
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 image save --daemon docker.io/kicbase/echo-server:functional-669544 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-669544
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ServiceCmd/List (0.94s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.94s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-669544 service list -o json: (1.284943913s)
functional_test.go:1490: Took "1.28509285s" to run "out/minikube-linux-amd64 -p functional-669544 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.50.142:30123
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-669544 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.50.142:30123
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
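"service hello-node --url" resolved http://192.168.50.142:30123 in this run; the NodePort address is cluster-specific. A minimal Go sketch that resolves the URL the same way the test does and issues a plain HTTP GET against it (illustrative only; binary path and profile are the ones from this run):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL via the same CLI call the test makes.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-669544",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatalf("service --url: %v", err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s\n", url, resp.Status, body)
}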

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-669544
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-669544
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-669544
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (202.8s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-104111 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 13:29:30.662570  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 13:29:58.349884  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-104111 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.116382448s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (202.80s)

TestMultiControlPlane/serial/DeployApp (5.22s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-104111 -- rollout status deployment/busybox: (3.081270358s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-7xsjn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-cbdn4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-sf8mb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-7xsjn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-cbdn4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-sf8mb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-7xsjn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-cbdn4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-sf8mb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.22s)
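The DeployApp checks exec nslookup inside each busybox pod for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local. A minimal Go sketch of that loop via kubectl exec; the pod names below are the ones generated in this run and will differ on a fresh deployment:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pod names are taken from this run's output; the lookup targets match the test loop.
	pods := []string{"busybox-fc5497c4f-7xsjn", "busybox-fc5497c4f-cbdn4", "busybox-fc5497c4f-sf8mb"}
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, target := range targets {
			out, err := exec.Command("kubectl", "--context", "ha-104111",
				"exec", pod, "--", "nslookup", target).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s failed: %v\n%s\n", pod, target, err, out)
				continue
			}
			fmt.Printf("%s -> %s ok\n", pod, target)
		}
	}
}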

TestMultiControlPlane/serial/PingHostFromPods (1.18s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-7xsjn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-7xsjn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-cbdn4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-cbdn4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-sf8mb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-104111 -- exec busybox-fc5497c4f-sf8mb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)

TestMultiControlPlane/serial/AddWorkerNode (53.25s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-104111 -v=7 --alsologtostderr
E0729 13:32:06.666134  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:06.671400  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:06.681729  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:06.702026  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:06.742377  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:06.822696  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:06.983253  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:07.303699  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:07.944280  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:09.224612  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:32:11.785316  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-104111 -v=7 --alsologtostderr: (52.428426822s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-104111 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp testdata/cp-test.txt ha-104111:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111:/home/docker/cp-test.txt ha-104111-m02:/home/docker/cp-test_ha-104111_ha-104111-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test_ha-104111_ha-104111-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111:/home/docker/cp-test.txt ha-104111-m03:/home/docker/cp-test_ha-104111_ha-104111-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m03 "sudo cat /home/docker/cp-test_ha-104111_ha-104111-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111:/home/docker/cp-test.txt ha-104111-m04:/home/docker/cp-test_ha-104111_ha-104111-m04.txt
E0729 13:32:16.908331  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test_ha-104111_ha-104111-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp testdata/cp-test.txt ha-104111-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m02:/home/docker/cp-test.txt ha-104111:/home/docker/cp-test_ha-104111-m02_ha-104111.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111 "sudo cat /home/docker/cp-test_ha-104111-m02_ha-104111.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m02:/home/docker/cp-test.txt ha-104111-m03:/home/docker/cp-test_ha-104111-m02_ha-104111-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m03 "sudo cat /home/docker/cp-test_ha-104111-m02_ha-104111-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m02:/home/docker/cp-test.txt ha-104111-m04:/home/docker/cp-test_ha-104111-m02_ha-104111-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test_ha-104111-m02_ha-104111-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp testdata/cp-test.txt ha-104111-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt ha-104111:/home/docker/cp-test_ha-104111-m03_ha-104111.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111 "sudo cat /home/docker/cp-test_ha-104111-m03_ha-104111.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt ha-104111-m02:/home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test_ha-104111-m03_ha-104111-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m03:/home/docker/cp-test.txt ha-104111-m04:/home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test_ha-104111-m03_ha-104111-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp testdata/cp-test.txt ha-104111-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3327814908/001/cp-test_ha-104111-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt ha-104111:/home/docker/cp-test_ha-104111-m04_ha-104111.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111 "sudo cat /home/docker/cp-test_ha-104111-m04_ha-104111.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt ha-104111-m02:/home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test_ha-104111-m04_ha-104111-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 cp ha-104111-m04:/home/docker/cp-test.txt ha-104111-m03:/home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m03 "sudo cat /home/docker/cp-test_ha-104111-m04_ha-104111-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.81s)
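Each cp above is paired with an ssh "sudo cat" on both the source and the destination node so the copied contents can be compared. A by-hand sketch of one round trip (the diff-based comparison is illustrative; the test itself compares contents in Go):

    out/minikube-linux-amd64 -p ha-104111 cp testdata/cp-test.txt ha-104111-m02:/home/docker/cp-test.txt
    diff testdata/cp-test.txt \
      <(out/minikube-linux-amd64 -p ha-104111 ssh -n ha-104111-m02 "sudo cat /home/docker/cp-test.txt")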

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0729 13:34:50.511954  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.484327361s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-104111 node delete m03 -v=7 --alsologtostderr: (16.43785715s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.20s)
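The go-template in the final check walks every node's conditions and prints the status of its Ready condition, one line per node; with m03 deleted, the expected output is one "True" per remaining node (three lines in this run). The same query run by hand against this context:

    kubectl --context ha-104111 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'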

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (382.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-104111 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 13:47:06.667704  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:48:29.715257  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:49:30.662585  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-104111 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m21.791197789s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (382.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (77.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-104111 --control-plane -v=7 --alsologtostderr
E0729 13:52:06.665278  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-104111 --control-plane -v=7 --alsologtostderr: (1m16.407592044s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-104111 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
x
+
TestJSONOutput/start/Command (58.55s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-422436 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-422436 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.545053985s)
--- PASS: TestJSONOutput/start/Command (58.55s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-422436 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-422436 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-422436 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-422436 --output=json --user=testUser: (7.359284895s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-626823 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-626823 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.324222ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3ce7c742-f98a-4d31-a52a-45f687e7e719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-626823] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26213a9b-70ed-4869-bca3-600b50060688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19338"}}
	{"specversion":"1.0","id":"81f1b51c-c5e2-46c8-a564-9dc11d909878","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"df7acff1-2b52-4f61-a929-dd8730785148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig"}}
	{"specversion":"1.0","id":"8610b7e0-1baa-4acb-86b7-f0a037273c72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube"}}
	{"specversion":"1.0","id":"5c20d61e-3bbb-4147-9aea-322bccabe688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"50ab1673-1f8b-43fb-b6ac-e44e08870220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"173cfc11-60fe-4e45-b225-258b264ccde8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-626823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-626823
--- PASS: TestErrorJSONOutput (0.19s)
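With --output=json, each line minikube prints is a single CloudEvents-style JSON object (specversion, id, source, type, data), so the stream shown in the stdout block above can be filtered per event type. A minimal sketch, assuming jq is available on the host:

    out/minikube-linux-amd64 start -p json-output-error-626823 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # expected: The driver 'fail' is not supported on linux/amd64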

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (85.23s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-062242 --driver=kvm2  --container-runtime=crio
E0729 13:54:30.662764  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-062242 --driver=kvm2  --container-runtime=crio: (43.537816378s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-065388 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-065388 --driver=kvm2  --container-runtime=crio: (39.303282612s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-062242
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-065388
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-065388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-065388
helpers_test.go:175: Cleaning up "first-062242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-062242
--- PASS: TestMinikubeProfile (85.23s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (26.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-147892 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-147892 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.358314826s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.36s)
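The --mount* flags configure the host-to-guest 9p share: uid/gid 0 map the mounted files to root, msize is (roughly) the 9p message-size option, and port is the port the share is served on; the VerifyMountFirst step that follows checks the result. A by-hand check (exact option formatting in the mount table may vary):

    out/minikube-linux-amd64 -p mount-start-1-147892 ssh -- "mount | grep 9p"
    out/minikube-linux-amd64 -p mount-start-1-147892 ssh -- "ls /minikube-host"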

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-147892 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-147892 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (26.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-167488 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-167488 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.423870881s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.42s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167488 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167488 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.54s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-147892 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.54s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167488 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167488 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-167488
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-167488: (1.270638641s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.27s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-167488
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-167488: (20.27202903s)
--- PASS: TestMountStart/serial/RestartStopped (21.27s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167488 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167488 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (116.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999945 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 13:57:06.667491  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 13:57:33.711292  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-999945 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.288934746s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.69s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-999945 -- rollout status deployment/busybox: (1.831789813s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-cfbps -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-wf4lj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-cfbps -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-wf4lj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-cfbps -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-wf4lj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.64s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-cfbps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-cfbps -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-wf4lj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999945 -- exec busybox-fc5497c4f-wf4lj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (49.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-999945 -v 3 --alsologtostderr
E0729 13:59:30.662738  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-999945 -v 3 --alsologtostderr: (48.550804712s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.11s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-999945 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp testdata/cp-test.txt multinode-999945:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2451309451/001/cp-test_multinode-999945.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945:/home/docker/cp-test.txt multinode-999945-m02:/home/docker/cp-test_multinode-999945_multinode-999945-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m02 "sudo cat /home/docker/cp-test_multinode-999945_multinode-999945-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945:/home/docker/cp-test.txt multinode-999945-m03:/home/docker/cp-test_multinode-999945_multinode-999945-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m03 "sudo cat /home/docker/cp-test_multinode-999945_multinode-999945-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp testdata/cp-test.txt multinode-999945-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2451309451/001/cp-test_multinode-999945-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945-m02:/home/docker/cp-test.txt multinode-999945:/home/docker/cp-test_multinode-999945-m02_multinode-999945.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945 "sudo cat /home/docker/cp-test_multinode-999945-m02_multinode-999945.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945-m02:/home/docker/cp-test.txt multinode-999945-m03:/home/docker/cp-test_multinode-999945-m02_multinode-999945-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m03 "sudo cat /home/docker/cp-test_multinode-999945-m02_multinode-999945-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp testdata/cp-test.txt multinode-999945-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2451309451/001/cp-test_multinode-999945-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt multinode-999945:/home/docker/cp-test_multinode-999945-m03_multinode-999945.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945 "sudo cat /home/docker/cp-test_multinode-999945-m03_multinode-999945.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 cp multinode-999945-m03:/home/docker/cp-test.txt multinode-999945-m02:/home/docker/cp-test_multinode-999945-m03_multinode-999945-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 ssh -n multinode-999945-m02 "sudo cat /home/docker/cp-test_multinode-999945-m03_multinode-999945-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.16s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-999945 node stop m03: (1.276144091s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-999945 status: exit status 7 (414.296816ms)

                                                
                                                
-- stdout --
	multinode-999945
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-999945-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-999945-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-999945 status --alsologtostderr: exit status 7 (415.228588ms)

                                                
                                                
-- stdout --
	multinode-999945
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-999945-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-999945-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 13:59:47.304009 1009826 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:59:47.304148 1009826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:59:47.304158 1009826 out.go:304] Setting ErrFile to fd 2...
	I0729 13:59:47.304165 1009826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:59:47.304353 1009826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 13:59:47.304565 1009826 out.go:298] Setting JSON to false
	I0729 13:59:47.304599 1009826 mustload.go:65] Loading cluster: multinode-999945
	I0729 13:59:47.304720 1009826 notify.go:220] Checking for updates...
	I0729 13:59:47.305006 1009826 config.go:182] Loaded profile config "multinode-999945": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 13:59:47.305027 1009826 status.go:255] checking status of multinode-999945 ...
	I0729 13:59:47.305438 1009826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:59:47.305509 1009826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:59:47.324633 1009826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0729 13:59:47.324996 1009826 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:59:47.325522 1009826 main.go:141] libmachine: Using API Version  1
	I0729 13:59:47.325544 1009826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:59:47.325926 1009826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:59:47.326123 1009826 main.go:141] libmachine: (multinode-999945) Calling .GetState
	I0729 13:59:47.327654 1009826 status.go:330] multinode-999945 host status = "Running" (err=<nil>)
	I0729 13:59:47.327673 1009826 host.go:66] Checking if "multinode-999945" exists ...
	I0729 13:59:47.328009 1009826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:59:47.328051 1009826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:59:47.343021 1009826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I0729 13:59:47.343402 1009826 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:59:47.343879 1009826 main.go:141] libmachine: Using API Version  1
	I0729 13:59:47.343912 1009826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:59:47.344213 1009826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:59:47.344423 1009826 main.go:141] libmachine: (multinode-999945) Calling .GetIP
	I0729 13:59:47.346840 1009826 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 13:59:47.347221 1009826 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 13:59:47.347243 1009826 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 13:59:47.347366 1009826 host.go:66] Checking if "multinode-999945" exists ...
	I0729 13:59:47.347661 1009826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:59:47.347701 1009826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:59:47.362503 1009826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I0729 13:59:47.362888 1009826 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:59:47.363285 1009826 main.go:141] libmachine: Using API Version  1
	I0729 13:59:47.363304 1009826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:59:47.363585 1009826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:59:47.363771 1009826 main.go:141] libmachine: (multinode-999945) Calling .DriverName
	I0729 13:59:47.363961 1009826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:59:47.363985 1009826 main.go:141] libmachine: (multinode-999945) Calling .GetSSHHostname
	I0729 13:59:47.366480 1009826 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 13:59:47.366927 1009826 main.go:141] libmachine: (multinode-999945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a6:ee", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:57:01 +0000 UTC Type:0 Mac:52:54:00:dd:a6:ee Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-999945 Clientid:01:52:54:00:dd:a6:ee}
	I0729 13:59:47.366956 1009826 main.go:141] libmachine: (multinode-999945) DBG | domain multinode-999945 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:a6:ee in network mk-multinode-999945
	I0729 13:59:47.367066 1009826 main.go:141] libmachine: (multinode-999945) Calling .GetSSHPort
	I0729 13:59:47.367240 1009826 main.go:141] libmachine: (multinode-999945) Calling .GetSSHKeyPath
	I0729 13:59:47.367395 1009826 main.go:141] libmachine: (multinode-999945) Calling .GetSSHUsername
	I0729 13:59:47.367528 1009826 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945/id_rsa Username:docker}
	I0729 13:59:47.451506 1009826 ssh_runner.go:195] Run: systemctl --version
	I0729 13:59:47.457383 1009826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:59:47.473437 1009826 kubeconfig.go:125] found "multinode-999945" server: "https://192.168.39.69:8443"
	I0729 13:59:47.473473 1009826 api_server.go:166] Checking apiserver status ...
	I0729 13:59:47.473529 1009826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:59:47.487187 1009826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1124/cgroup
	W0729 13:59:47.496264 1009826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1124/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:59:47.496308 1009826 ssh_runner.go:195] Run: ls
	I0729 13:59:47.500978 1009826 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0729 13:59:47.505170 1009826 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0729 13:59:47.505193 1009826 status.go:422] multinode-999945 apiserver status = Running (err=<nil>)
	I0729 13:59:47.505203 1009826 status.go:257] multinode-999945 status: &{Name:multinode-999945 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:59:47.505221 1009826 status.go:255] checking status of multinode-999945-m02 ...
	I0729 13:59:47.505636 1009826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:59:47.505680 1009826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:59:47.521911 1009826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0729 13:59:47.522349 1009826 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:59:47.522857 1009826 main.go:141] libmachine: Using API Version  1
	I0729 13:59:47.522877 1009826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:59:47.523179 1009826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:59:47.523437 1009826 main.go:141] libmachine: (multinode-999945-m02) Calling .GetState
	I0729 13:59:47.525111 1009826 status.go:330] multinode-999945-m02 host status = "Running" (err=<nil>)
	I0729 13:59:47.525134 1009826 host.go:66] Checking if "multinode-999945-m02" exists ...
	I0729 13:59:47.525416 1009826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:59:47.525449 1009826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:59:47.540734 1009826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0729 13:59:47.541188 1009826 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:59:47.541658 1009826 main.go:141] libmachine: Using API Version  1
	I0729 13:59:47.541683 1009826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:59:47.541965 1009826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:59:47.542148 1009826 main.go:141] libmachine: (multinode-999945-m02) Calling .GetIP
	I0729 13:59:47.544829 1009826 main.go:141] libmachine: (multinode-999945-m02) DBG | domain multinode-999945-m02 has defined MAC address 52:54:00:52:02:d6 in network mk-multinode-999945
	I0729 13:59:47.545198 1009826 main.go:141] libmachine: (multinode-999945-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:02:d6", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:09 +0000 UTC Type:0 Mac:52:54:00:52:02:d6 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-999945-m02 Clientid:01:52:54:00:52:02:d6}
	I0729 13:59:47.545225 1009826 main.go:141] libmachine: (multinode-999945-m02) DBG | domain multinode-999945-m02 has defined IP address 192.168.39.130 and MAC address 52:54:00:52:02:d6 in network mk-multinode-999945
	I0729 13:59:47.545357 1009826 host.go:66] Checking if "multinode-999945-m02" exists ...
	I0729 13:59:47.545724 1009826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:59:47.545778 1009826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:59:47.560793 1009826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0729 13:59:47.561182 1009826 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:59:47.561623 1009826 main.go:141] libmachine: Using API Version  1
	I0729 13:59:47.561646 1009826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:59:47.561939 1009826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:59:47.562121 1009826 main.go:141] libmachine: (multinode-999945-m02) Calling .DriverName
	I0729 13:59:47.562279 1009826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:59:47.562300 1009826 main.go:141] libmachine: (multinode-999945-m02) Calling .GetSSHHostname
	I0729 13:59:47.565191 1009826 main.go:141] libmachine: (multinode-999945-m02) DBG | domain multinode-999945-m02 has defined MAC address 52:54:00:52:02:d6 in network mk-multinode-999945
	I0729 13:59:47.565657 1009826 main.go:141] libmachine: (multinode-999945-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:02:d6", ip: ""} in network mk-multinode-999945: {Iface:virbr1 ExpiryTime:2024-07-29 14:58:09 +0000 UTC Type:0 Mac:52:54:00:52:02:d6 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-999945-m02 Clientid:01:52:54:00:52:02:d6}
	I0729 13:59:47.565702 1009826 main.go:141] libmachine: (multinode-999945-m02) DBG | domain multinode-999945-m02 has defined IP address 192.168.39.130 and MAC address 52:54:00:52:02:d6 in network mk-multinode-999945
	I0729 13:59:47.565851 1009826 main.go:141] libmachine: (multinode-999945-m02) Calling .GetSSHPort
	I0729 13:59:47.566057 1009826 main.go:141] libmachine: (multinode-999945-m02) Calling .GetSSHKeyPath
	I0729 13:59:47.566201 1009826 main.go:141] libmachine: (multinode-999945-m02) Calling .GetSSHUsername
	I0729 13:59:47.566375 1009826 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-974764/.minikube/machines/multinode-999945-m02/id_rsa Username:docker}
	I0729 13:59:47.643260 1009826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:59:47.657476 1009826 status.go:257] multinode-999945-m02 status: &{Name:multinode-999945-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:59:47.657511 1009826 status.go:255] checking status of multinode-999945-m03 ...
	I0729 13:59:47.657826 1009826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 13:59:47.657861 1009826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:59:47.674169 1009826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0729 13:59:47.674613 1009826 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:59:47.675047 1009826 main.go:141] libmachine: Using API Version  1
	I0729 13:59:47.675068 1009826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:59:47.675382 1009826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:59:47.675588 1009826 main.go:141] libmachine: (multinode-999945-m03) Calling .GetState
	I0729 13:59:47.677087 1009826 status.go:330] multinode-999945-m03 host status = "Stopped" (err=<nil>)
	I0729 13:59:47.677102 1009826 status.go:343] host is not running, skipping remaining checks
	I0729 13:59:47.677109 1009826 status.go:257] multinode-999945-m03 status: &{Name:multinode-999945-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)

TestMultiNode/serial/StartAfterStop (36.74s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-999945 node start m03 -v=7 --alsologtostderr: (36.112754499s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.74s)

TestMultiNode/serial/DeleteNode (2.08s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-999945 node delete m03: (1.558017084s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.08s)

TestMultiNode/serial/RestartMultiNode (181.12s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999945 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 14:09:30.665305  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-999945 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.58605774s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999945 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.12s)

TestMultiNode/serial/ValidateNameConflict (44.72s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-999945
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999945-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-999945-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.190485ms)

                                                
                                                
-- stdout --
	* [multinode-999945-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-999945-m02' is duplicated with machine name 'multinode-999945-m02' in profile 'multinode-999945'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999945-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-999945-m03 --driver=kvm2  --container-runtime=crio: (43.530438534s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-999945
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-999945: exit status 80 (209.148802ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-999945 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-999945-m03 already exists in multinode-999945-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-999945-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.72s)

TestScheduledStopUnix (109.27s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-715728 --memory=2048 --driver=kvm2  --container-runtime=crio
E0729 14:17:06.665432  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-715728 --memory=2048 --driver=kvm2  --container-runtime=crio: (37.664190432s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-715728 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-715728 -n scheduled-stop-715728
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-715728 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-715728 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-715728 -n scheduled-stop-715728
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-715728
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-715728 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-715728
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-715728: exit status 7 (72.137958ms)

                                                
                                                
-- stdout --
	scheduled-stop-715728
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-715728 -n scheduled-stop-715728
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-715728 -n scheduled-stop-715728: exit status 7 (64.779195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-715728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-715728
--- PASS: TestScheduledStopUnix (109.27s)

TestRunningBinaryUpgrade (174.16s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1367850160 start -p running-upgrade-932740 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1367850160 start -p running-upgrade-932740 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m27.984821479s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-932740 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-932740 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.711214518s)
helpers_test.go:175: Cleaning up "running-upgrade-932740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-932740
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-932740: (1.085861469s)
--- PASS: TestRunningBinaryUpgrade (174.16s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721916 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-721916 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (89.67961ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-721916] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (93.03s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721916 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-721916 --driver=kvm2  --container-runtime=crio: (1m32.787391957s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-721916 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.03s)

TestNetworkPlugins/group/false (2.97s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-513289 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-513289 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (111.964984ms)

                                                
                                                
-- stdout --
	* [false-513289] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 14:18:27.098715 1017723 out.go:291] Setting OutFile to fd 1 ...
	I0729 14:18:27.098844 1017723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:18:27.098863 1017723 out.go:304] Setting ErrFile to fd 2...
	I0729 14:18:27.098870 1017723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 14:18:27.099068 1017723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-974764/.minikube/bin
	I0729 14:18:27.099749 1017723 out.go:298] Setting JSON to false
	I0729 14:18:27.100870 1017723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14459,"bootTime":1722248248,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 14:18:27.100949 1017723 start.go:139] virtualization: kvm guest
	I0729 14:18:27.103541 1017723 out.go:177] * [false-513289] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 14:18:27.105174 1017723 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 14:18:27.105220 1017723 notify.go:220] Checking for updates...
	I0729 14:18:27.108342 1017723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 14:18:27.109953 1017723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-974764/kubeconfig
	I0729 14:18:27.112061 1017723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-974764/.minikube
	I0729 14:18:27.113677 1017723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 14:18:27.115485 1017723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 14:18:27.117775 1017723 config.go:182] Loaded profile config "NoKubernetes-721916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:18:27.117871 1017723 config.go:182] Loaded profile config "force-systemd-env-764732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:18:27.117984 1017723 config.go:182] Loaded profile config "offline-crio-715623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 14:18:27.118087 1017723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 14:18:27.154072 1017723 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 14:18:27.155464 1017723 start.go:297] selected driver: kvm2
	I0729 14:18:27.155480 1017723 start.go:901] validating driver "kvm2" against <nil>
	I0729 14:18:27.155503 1017723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 14:18:27.157570 1017723 out.go:177] 
	W0729 14:18:27.158946 1017723 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0729 14:18:27.160390 1017723 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-513289 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-513289

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-513289

>>> host: /etc/nsswitch.conf:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /etc/hosts:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /etc/resolv.conf:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-513289

>>> host: crictl pods:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: crictl containers:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> k8s: describe netcat deployment:
error: context "false-513289" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-513289" does not exist

>>> k8s: netcat logs:
error: context "false-513289" does not exist

>>> k8s: describe coredns deployment:
error: context "false-513289" does not exist

>>> k8s: describe coredns pods:
error: context "false-513289" does not exist

>>> k8s: coredns logs:
error: context "false-513289" does not exist

>>> k8s: describe api server pod(s):
error: context "false-513289" does not exist

>>> k8s: api server logs:
error: context "false-513289" does not exist

>>> host: /etc/cni:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: ip a s:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: ip r s:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: iptables-save:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: iptables table nat:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> k8s: describe kube-proxy daemon set:
error: context "false-513289" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-513289" does not exist

>>> k8s: kube-proxy logs:
error: context "false-513289" does not exist

>>> host: kubelet daemon status:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: kubelet daemon config:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> k8s: kubelet logs:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-513289

>>> host: docker daemon status:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: docker daemon config:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /etc/docker/daemon.json:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: docker system info:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: cri-docker daemon status:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: cri-docker daemon config:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: cri-dockerd version:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: containerd daemon status:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: containerd daemon config:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /etc/containerd/config.toml:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: containerd config dump:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: crio daemon status:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: crio daemon config:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: /etc/crio:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"

>>> host: crio config:
* Profile "false-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513289"
----------------------- debugLogs end: false-513289 [took: 2.714050226s] --------------------------------
helpers_test.go:175: Cleaning up "false-513289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-513289
--- PASS: TestNetworkPlugins/group/false (2.97s)

TestStoppedBinaryUpgrade/Setup (0.6s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

TestStoppedBinaryUpgrade/Upgrade (146.09s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1448829078 start -p stopped-upgrade-626874 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0729 14:19:30.662794  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1448829078 start -p stopped-upgrade-626874 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m42.367171311s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1448829078 -p stopped-upgrade-626874 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1448829078 -p stopped-upgrade-626874 stop: (2.144958973s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-626874 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-626874 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.572546246s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.09s)

TestNoKubernetes/serial/StartWithStopK8s (58.99s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721916 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-721916 --no-kubernetes --driver=kvm2  --container-runtime=crio: (57.840616229s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-721916 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-721916 status -o json: exit status 2 (246.76499ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-721916","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-721916
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (58.99s)

TestNoKubernetes/serial/Start (41.27s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721916 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-721916 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.272630352s)
--- PASS: TestNoKubernetes/serial/Start (41.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-626874
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-721916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-721916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.160947ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (1.48s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.48s)

TestPause/serial/Start (97.84s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-414966 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-414966 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m37.835939598s)
--- PASS: TestPause/serial/Start (97.84s)

TestNoKubernetes/serial/Stop (1.28s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-721916
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-721916: (1.283609732s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (44.94s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721916 --driver=kvm2  --container-runtime=crio
E0729 14:21:49.718746  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
E0729 14:22:06.665650  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-721916 --driver=kvm2  --container-runtime=crio: (44.935877449s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.94s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-721916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-721916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.865262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestNetworkPlugins/group/auto/Start (79.74s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m19.741160053s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.74s)

TestNetworkPlugins/group/kindnet/Start (107.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0729 14:24:30.662990  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m47.15338275s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (107.15s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-513289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-513289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zrrk9" [630bdc11-b58d-45e7-9e18-1a6696e41ae5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zrrk9" [630bdc11-b58d-45e7-9e18-1a6696e41ae5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003997825s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-513289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
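The three short tests above probe the same netcat deployment in different ways: DNS resolution of kubernetes.default, a loopback connection to port 8080, and a connection to the pod's own Service name, which only succeeds when hairpin traffic (pod reaching itself through its Service) is handled. A minimal sketch, not the suite's helpers, that runs the same probes against the auto-513289 context from this log:

// Run the dns, localhost and hairpin probes in order via kubectl exec.
package main

import (
	"fmt"
	"os/exec"
)

func probe(name string, args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: FAILED: %v\n%s", name, err, out)
		return
	}
	fmt.Printf("%s: ok\n", name)
}

func main() {
	base := []string{"--context", "auto-513289", "exec", "deployment/netcat", "--"}
	// DNS: the in-cluster resolver must answer for kubernetes.default.
	probe("dns", append(base, "nslookup", "kubernetes.default")...)
	// Localhost: the pod can reach its own port 8080 via loopback.
	probe("localhost", append(base, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")...)
	// HairPin: the pod dials the "netcat" Service name and must be routed back to itself.
	probe("hairpin", append(base, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")...)
}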

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (83.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m23.336411665s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lkpkk" [5d0e6a7b-ff10-41f5-89b2-a1d12cc91311] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005342558s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-513289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-513289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lqlzb" [dd857adb-5d65-4a75-bd08-a15bc09977d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lqlzb" [dd857adb-5d65-4a75-bd08-a15bc09977d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004713823s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (101.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m41.58246459s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (101.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-513289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (84.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m24.056870222s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (124.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0729 14:27:06.665459  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/functional-669544/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m4.567012628s)
--- PASS: TestNetworkPlugins/group/flannel/Start (124.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jl7hq" [48df22d0-937c-425e-a0ea-17bd62f2b719] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00496382s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-513289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-513289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ncthl" [085527f3-865c-4468-899f-b0f90535c81f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ncthl" [085527f3-865c-4468-899f-b0f90535c81f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005526172s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-513289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (61.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-513289 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m1.97623499s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.98s)
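The Start tests in this group share one minikube invocation and differ only in how the CNI is selected: most pass a plugin name to --cni, custom-flannel passes a manifest path (testdata/kube-flannel.yaml), enable-default-cni uses --enable-default-cni=true, and the auto group passes nothing and lets minikube choose. A minimal sketch that reconstructs the seven invocations seen in this log:

// Print the start command line for each network-plugin variant in this run.
package main

import (
	"fmt"
	"strings"
)

func startArgs(profile, cniFlag string) []string {
	args := []string{"start", "-p", profile, "--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m", "--driver=kvm2", "--container-runtime=crio"}
	if cniFlag != "" {
		args = append(args, cniFlag)
	}
	return args
}

func main() {
	cases := map[string]string{
		"auto-513289":               "", // no CNI flag: minikube's default selection
		"kindnet-513289":            "--cni=kindnet",
		"calico-513289":             "--cni=calico",
		"custom-flannel-513289":     "--cni=testdata/kube-flannel.yaml", // a manifest path also works
		"flannel-513289":            "--cni=flannel",
		"bridge-513289":             "--cni=bridge",
		"enable-default-cni-513289": "--enable-default-cni=true",
	}
	for profile, flag := range cases {
		fmt.Println("out/minikube-linux-amd64 " + strings.Join(startArgs(profile, flag), " "))
	}
}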

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-513289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-513289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-64hvw" [418e1eda-fbf9-46db-8346-e1ed5e0be546] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-64hvw" [418e1eda-fbf9-46db-8346-e1ed5e0be546] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.006266293s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-513289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-513289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-224bp" [fbafc66e-5805-460e-9d0a-e31f0af5d618] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-224bp" [fbafc66e-5805-460e-9d0a-e31f0af5d618] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004961761s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-513289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-513289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (96.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-603534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-603534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m36.105317398s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (96.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c78gr" [ef5dce70-f17d-4325-85e8-a630ae5ee75a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006511572s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-513289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (14.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-513289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xwdk5" [29014776-20b0-4ae7-830c-ff5a3984222f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xwdk5" [29014776-20b0-4ae7-830c-ff5a3984222f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.003884597s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-513289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-513289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qfl7p" [4a978d17-4480-40c9-949e-90a75804b939] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qfl7p" [4a978d17-4480-40c9-949e-90a75804b939] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.004941182s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-513289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-513289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-513289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
E0729 14:58:55.651418  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:59:00.040459  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (99.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-668123 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-668123 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m39.798603028s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (99.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-751306 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-751306 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m56.86065062s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-603534 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d0343c62-55d5-4466-98d0-afb524f3a1c2] Pending
helpers_test.go:344: "busybox" [d0343c62-55d5-4466-98d0-afb524f3a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d0343c62-55d5-4466-98d0-afb524f3a1c2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005501916s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-603534 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)
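DeployApp is a smoke test more than a workload test: it applies testdata/busybox.yaml, waits for the pod to report Running, then execs `ulimit -n` to confirm exec plumbing through the runtime works. A rough equivalent using `kubectl wait` instead of the suite's own poller (context name from this log; the manifest is assumed to create a pod named busybox):

// Deploy busybox, wait for readiness, and run the ulimit smoke check.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "no-preload-603534"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	_ = run("create", "-f", "testdata/busybox.yaml")
	_ = run("wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m")
	// The exec doubles as a check that stdio is wired up and the pod's
	// file-descriptor limit is sane.
	_ = run("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
}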

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-603534 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-603534 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)
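EnableAddonWhileActive points the metrics-server image at registry.k8s.io/echoserver:1.4 on a registry of fake.domain, presumably so the override plumbing can be verified without a working metrics scrape, and then only checks that the Deployment exists. A minimal sketch of the same two steps, with names taken from this log:

// Enable the addon with image/registry overrides, then describe the Deployment.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
		"-p", "no-preload-603534",
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		fmt.Printf("enable failed: %v\n%s", err, out)
		return
	}
	describe := exec.Command("kubectl", "--context", "no-preload-603534",
		"describe", "deploy/metrics-server", "-n", "kube-system")
	out, err := describe.CombinedOutput()
	fmt.Printf("%s\nerr=%v\n", out, err)
}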

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-668123 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [edbc7100-5ac6-4390-98cf-b25430811079] Pending
E0729 14:31:10.797601  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:31:11.438460  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
helpers_test.go:344: "busybox" [edbc7100-5ac6-4390-98cf-b25430811079] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0729 14:31:12.558118  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/auto-513289/client.crt: no such file or directory
E0729 14:31:12.719581  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
helpers_test.go:344: "busybox" [edbc7100-5ac6-4390-98cf-b25430811079] Running
E0729 14:31:15.279996  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003946709s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-668123 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-668123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-668123 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-751306 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [315fded0-f75e-4418-8cd1-67ec95acd643] Pending
helpers_test.go:344: "busybox" [315fded0-f75e-4418-8cd1-67ec95acd643] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [315fded0-f75e-4418-8cd1-67ec95acd643] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004081644s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-751306 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-751306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-751306 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (684.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-603534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 14:32:59.776248  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:32:59.781477  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:32:59.791777  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:32:59.812038  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:32:59.852383  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:32:59.932772  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:33:00.093337  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:33:00.414318  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:33:01.055368  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:33:02.336266  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-603534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m23.861568657s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603534 -n no-preload-603534
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (684.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (522.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-668123 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 14:33:54.003257  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/kindnet-513289/client.crt: no such file or directory
E0729 14:33:55.652247  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:55.657559  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:55.667847  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:55.688131  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:55.728480  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:55.808860  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:55.969363  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:56.290002  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:56.930158  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:33:58.210697  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-668123 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (8m42.557354414s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-668123 -n embed-certs-668123
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (522.82s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (566.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-751306 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 14:34:16.132713  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:34:20.523049  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
E0729 14:34:21.699724  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/custom-flannel-513289/client.crt: no such file or directory
E0729 14:34:30.662265  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
E0729 14:34:30.804530  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/enable-default-cni-513289/client.crt: no such file or directory
E0729 14:34:36.613451  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/flannel-513289/client.crt: no such file or directory
E0729 14:34:41.003908  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/bridge-513289/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-751306 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m26.357969235s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751306 -n default-k8s-diff-port-751306
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (566.61s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-360866 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-360866 --alsologtostderr -v=3: (2.290921861s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-360866 -n old-k8s-version-360866: exit status 7 (65.266666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-360866 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
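After a stop, `minikube status --format={{.Host}}` prints Stopped and exits non-zero (7 in this run), which the test explicitly tolerates before re-enabling the dashboard addon. A minimal sketch of that check, not the suite's code:

// Query the host status after a stop and accept a non-zero exit as long as the
// host field reads "Stopped".
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-360866", "-n", "old-k8s-version-360866")
	out, err := cmd.Output() // stdout is still captured when the exit code is non-zero
	host := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		fmt.Printf("could not query status: %v\n", err)
		return
	}
	fmt.Printf("host=%q err=%v\n", host, err)
	if host == "Stopped" {
		// Safe point to run: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-360866
		fmt.Println("profile is stopped; addon enable can proceed")
	}
}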

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-342058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-342058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (46.355612008s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-342058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-342058 --alsologtostderr -v=3
E0729 14:59:30.662456  982046 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-974764/.minikube/profiles/addons-881745/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-342058 --alsologtostderr -v=3: (11.550982557s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-342058 -n newest-cni-342058
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-342058 -n newest-cni-342058: exit status 7 (62.989611ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-342058 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (71.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-342058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-342058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m10.736062367s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-342058 -n newest-cni-342058
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (71.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-342058 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-342058 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-342058 -n newest-cni-342058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-342058 -n newest-cni-342058: exit status 2 (232.268292ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-342058 -n newest-cni-342058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-342058 -n newest-cni-342058: exit status 2 (232.126248ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-342058 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-342058 -n newest-cni-342058
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-342058 -n newest-cni-342058
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.30s)
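The Pause test cycles the profile through pause and unpause, reading individual status fields with Go templates in between: {{.APIServer}} reports Paused while {{.Kubelet}} reports Stopped, each with exit status 2, which the test again treats as acceptable. A minimal sketch of the same cycle:

// Pause the profile, print the APIServer and Kubelet status fields, unpause, repeat.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func status(field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", "newest-cni-342058", "-n", "newest-cni-342058").Output()
	return strings.TrimSpace(string(out))
}

func main() {
	_ = exec.Command("out/minikube-linux-amd64", "pause", "-p", "newest-cni-342058").Run()
	fmt.Printf("after pause:   APIServer=%s Kubelet=%s\n", status("APIServer"), status("Kubelet"))

	_ = exec.Command("out/minikube-linux-amd64", "unpause", "-p", "newest-cni-342058").Run()
	fmt.Printf("after unpause: APIServer=%s Kubelet=%s\n", status("APIServer"), status("Kubelet"))
}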

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 2.99
272 TestNetworkPlugins/group/cilium 3.34
285 TestStartStop/group/disable-driver-mounts 0.17

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (2.99s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-513289 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-513289

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-513289

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /etc/hosts:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /etc/resolv.conf:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-513289

>>> host: crictl pods:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: crictl containers:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> k8s: describe netcat deployment:
error: context "kubenet-513289" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-513289" does not exist

>>> k8s: netcat logs:
error: context "kubenet-513289" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-513289" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-513289" does not exist

>>> k8s: coredns logs:
error: context "kubenet-513289" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-513289" does not exist

>>> k8s: api server logs:
error: context "kubenet-513289" does not exist

>>> host: /etc/cni:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: ip a s:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: ip r s:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: iptables-save:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: iptables table nat:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-513289" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-513289" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-513289" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: kubelet daemon config:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> k8s: kubelet logs:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-513289

>>> host: docker daemon status:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: docker daemon config:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: docker system info:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: cri-docker daemon status:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: cri-docker daemon config:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: cri-dockerd version:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: containerd daemon status:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: containerd daemon config:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: containerd config dump:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: crio daemon status:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: crio daemon config:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: /etc/crio:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

>>> host: crio config:
* Profile "kubenet-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513289"

----------------------- debugLogs end: kubenet-513289 [took: 2.853129265s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-513289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-513289
--- SKIP: TestNetworkPlugins/group/kubenet (2.99s)
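
The skip above follows from net_test.go:93: with the crio runtime a CNI has to be configured explicitly, so the kubenet (no CNI) variant of this matrix is never exercised. As a rough illustration only (the --driver, --container-runtime and --cni flags are standard minikube options, not taken from this run), an equivalent profile would instead be started with an explicit CNI, for example:

  $ minikube start -p kubenet-513289 --driver=kvm2 --container-runtime=crio --cni=bridge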

TestNetworkPlugins/group/cilium (3.34s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-513289 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-513289

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-513289

>>> host: /etc/nsswitch.conf:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /etc/hosts:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /etc/resolv.conf:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-513289

>>> host: crictl pods:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: crictl containers:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> k8s: describe netcat deployment:
error: context "cilium-513289" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-513289" does not exist

>>> k8s: netcat logs:
error: context "cilium-513289" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-513289" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-513289" does not exist

>>> k8s: coredns logs:
error: context "cilium-513289" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-513289" does not exist

>>> k8s: api server logs:
error: context "cilium-513289" does not exist

>>> host: /etc/cni:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: ip a s:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: ip r s:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: iptables-save:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: iptables table nat:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-513289

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-513289

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-513289" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-513289" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-513289

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-513289

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-513289" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-513289" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-513289" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-513289" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-513289" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: kubelet daemon config:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> k8s: kubelet logs:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-513289

>>> host: docker daemon status:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: docker daemon config:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: docker system info:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: cri-docker daemon status:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: cri-docker daemon config:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: cri-dockerd version:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: containerd daemon status:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: containerd daemon config:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: containerd config dump:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: crio daemon status:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: crio daemon config:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: /etc/crio:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

>>> host: crio config:
* Profile "cilium-513289" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513289"

----------------------- debugLogs end: cilium-513289 [took: 3.186002594s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-513289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-513289
--- SKIP: TestNetworkPlugins/group/cilium (3.34s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-054967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-054967
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)